Universal Skeleton Understanding via Differentiable Rendering and MLLMs

2026-03-18

Computer Vision and Pattern Recognition
AI summary

The authors created SkeletonLLM, a system that helps language models understand human skeleton movements by turning them into images the model can read. They built a tool called DrAction that converts different skeleton data into simple image sequences, allowing the model to learn from these visuals directly. They also used special training methods to help the model think step-by-step and better tell apart similar actions. This approach lets the language model work well with skeleton data, even though it wasn’t originally designed for it.

Multimodal Large Language Models, Skeleton Data, Visual Modality, Differentiable Rendering, Kinematics, Causal Reasoning Distillation, Discriminative Finetuning, Cross-format Transfer, Action Recognition, Captioning
Authors
Ziyi Wang, Peiming Li, Xinshun Wang, Yang Tang, Kai-Kuang Ma, Mengyuan Liu
Abstract
Multimodal large language models (MLLMs) exhibit strong visual-language reasoning, yet remain confined to their native modalities and cannot directly process structured, non-visual data such as human skeletons. Existing methods either compress skeleton dynamics into lossy feature vectors for text alignment, or quantize motion into discrete tokens that generalize poorly across heterogeneous skeleton formats. We present SkeletonLLM, which achieves universal skeleton understanding by translating arbitrary skeleton sequences into the MLLM's native visual modality. At its core is DrAction, a differentiable, format-agnostic renderer that converts skeletal kinematics into compact image sequences. Because the pipeline is end-to-end differentiable, MLLM gradients can directly guide the rendering to produce task-informative visual tokens. To further enhance reasoning capabilities, we introduce a cooperative training strategy: Causal Reasoning Distillation transfers structured, step-by-step reasoning from a teacher model, while Discriminative Finetuning sharpens decision boundaries between confusable actions. SkeletonLLM demonstrates strong generalization on diverse tasks including recognition, captioning, reasoning, and cross-format transfer -- suggesting a viable path for applying MLLMs to non-native modalities. Code will be released upon acceptance.
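The key mechanism the abstract describes is a differentiable renderer that maps joint coordinates to images, so that gradients from the MLLM can flow back to the rendering. A common way to make rasterization differentiable is to splat each joint as a smooth Gaussian blob rather than a hard pixel. The sketch below illustrates that idea only; the function name, 2D joint layout, and Gaussian-splatting choice are assumptions for illustration, not the paper's actual DrAction implementation.

```python
import numpy as np

def render_skeleton_frame(joints, size=32, sigma=1.5):
    """Softly rasterize 2D joint coordinates into a single image.

    Each joint deposits a Gaussian blob. Because every pixel value is a
    smooth function of the joint coordinates, gradients could flow from
    image pixels back to the skeleton -- the property a differentiable
    renderer needs so that downstream (e.g. MLLM) losses can shape the
    rendering. This is a toy sketch, not the DrAction renderer.
    """
    ys, xs = np.mgrid[0:size, 0:size]          # pixel coordinate grids
    img = np.zeros((size, size))
    for jx, jy in joints:                      # splat one blob per joint
        img += np.exp(-((xs - jx) ** 2 + (ys - jy) ** 2) / (2 * sigma ** 2))
    return img

# Hypothetical 5-joint pose (x, y) on a 32x32 canvas.
joints = np.array([[16, 4], [16, 12], [10, 18], [22, 18], [16, 26]], float)
frame = render_skeleton_frame(joints)
print(frame.shape)  # (32, 32)
```

A full pipeline would render each frame of a skeleton sequence this way and stack the results into the compact image sequence the MLLM consumes as visual tokens.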