MoRight: Motion Control Done Right

2026-04-08 · Computer Vision and Pattern Recognition

Computer Vision and Pattern Recognition · Artificial Intelligence · Graphics · Machine Learning · Robotics
AI summary

The authors present MoRight, a system for generating videos in which you can control both how objects move and the angle from which you watch. Unlike previous methods, MoRight keeps object movement and camera movement separate, so you can adjust each independently. It also learns how moving one object causes other objects to move, making interactions look more natural. The system can work forwards (predict what happens next) or backwards (infer what action caused the outcome you specify). Tests show it outperforms earlier approaches at creating high-quality, controllable videos with realistic interactions.

Keywords
motion control · camera viewpoint · disentangled representation · temporal cross-view attention · motion causality · kinematics · video generation · forward reasoning · inverse reasoning · physically plausible dynamics
Authors
Shaowei Liu, Xuanchi Ren, Tianchang Shen, Huan Ling, Saurabh Gupta, Shenlong Wang, Sanja Fidler, Jun Gao
Abstract
Generating motion-controlled videos--where user-specified actions drive physically plausible scene dynamics under freely chosen viewpoints--demands two capabilities: (1) disentangled motion control, allowing users to separately control object motion and adjust the camera viewpoint; and (2) motion causality, ensuring that user-driven actions trigger coherent reactions from other objects rather than merely displacing pixels. Existing methods fall short on both fronts: they entangle camera and object motion into a single tracking signal and treat motion as kinematic displacement without modeling causal relationships between object motions. We introduce MoRight, a unified framework that addresses both limitations through disentangled motion modeling. Object motion is specified in a canonical static view and transferred to an arbitrary target camera viewpoint via temporal cross-view attention, enabling disentangled camera and object control. We further decompose motion into active (user-driven) and passive (consequence) components, training the model to learn motion causality from data. At inference, users can either supply active motion and let MoRight predict the consequences (forward reasoning), or specify desired passive outcomes and have MoRight recover plausible driving actions (inverse reasoning), all while freely adjusting the camera viewpoint. Experiments on three benchmarks demonstrate state-of-the-art performance in generation quality, motion controllability, and interaction awareness.
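The abstract's central mechanism, transferring motion specified in a canonical static view onto an arbitrary target camera viewpoint via temporal cross-view attention, can be illustrated with a minimal sketch. This is not the paper's implementation: the token shapes, per-frame application, and single-head attention here are assumptions for illustration. Queries come from target-view tokens and keys/values from the canonical-view motion tokens, applied independently at each timestep:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_view_attention(target_tokens, canonical_motion):
    """Single-head attention: target-view tokens (queries) attend to
    canonical static-view motion tokens (keys and values), pulling the
    user-specified motion into the chosen camera's frame."""
    d_k = target_tokens.shape[-1]
    scores = target_tokens @ canonical_motion.T / np.sqrt(d_k)
    return softmax(scores) @ canonical_motion

# Hypothetical shapes: T frames, N tokens per frame, D feature channels.
T, N, D = 8, 16, 32
rng = np.random.default_rng(0)
canonical = rng.standard_normal((T, N, D))  # motion tokens, static view
target = rng.standard_normal((T, N, D))     # tokens for target camera view
transferred = np.stack([cross_view_attention(target[t], canonical[t])
                        for t in range(T)])
print(transferred.shape)  # (8, 16, 32)
```

Because the camera viewpoint enters only through the query tokens while the motion signal lives entirely in the canonical view, the two controls stay disentangled: changing the camera re-renders the same motion rather than altering it.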