Tri-Prompting: Video Diffusion with Unified Control over Scene, Subject, and Motion

2026-03-16 · Computer Vision and Pattern Recognition

AI summary

The authors present Tri-Prompting, a method that improves video generation by unifying three key controls: composing scenes, keeping a subject's appearance consistent across viewpoints, and adjusting camera or object motion. Unlike previous methods that handle these separately, their approach uses 3D tracking points and downsampled color (RGB) cues to manage the background and foreground together. They also introduce an inference-time technique that balances control strength against visual quality during generation. Their experiments show better results than earlier specialized systems at keeping subjects looking the same from different angles and at accurately following the requested motion.

video diffusion models, scene composition, multi-view consistency, subject customization, camera pose, object motion, 3D tracking, ControlNet, inference schedule, video synthesis
Authors
Zhenghong Zhou, Xiaohang Zhan, Zhiqin Chen, Soo Ye Kim, Nanxuan Zhao, Haitian Zheng, Qing Liu, He Zhang, Zhe Lin, Yuqian Zhou, Jiebo Luo
Abstract
Recent video diffusion models have made remarkable strides in visual quality, yet precise, fine-grained control remains a key bottleneck that limits practical customizability for content creation. For AI video creators, three forms of control are crucial: (i) scene composition, (ii) multi-view consistent subject customization, and (iii) camera-pose or object-motion adjustment. Existing methods typically handle these dimensions in isolation, with limited support for multi-view subject synthesis and identity preservation under arbitrary pose changes. This lack of a unified architecture makes it difficult to support versatile, jointly controllable video generation. We introduce Tri-Prompting, a unified framework and two-stage training paradigm that integrates scene composition, multi-view subject consistency, and motion control. Our approach leverages a dual-condition motion module driven by 3D tracking points for background scenes and downsampled RGB cues for foreground subjects. To balance controllability and visual realism, we further propose an inference-time ControlNet scale schedule. Tri-Prompting supports novel workflows, including 3D-aware subject insertion into arbitrary scenes and manipulation of existing subjects in an image. Experimental results demonstrate that Tri-Prompting significantly outperforms specialized baselines such as Phantom and DaS in multi-view subject identity, 3D consistency, and motion accuracy.
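
The inference-time ControlNet scale schedule is the paper's mechanism for trading controllability against visual realism: strong conditioning early in denoising pins down layout and motion, while weaker conditioning late lets the base model refine texture. The abstract does not specify the schedule's form, so the sketch below shows one plausible instantiation, a cosine decay of the conditioning scale across steps. All names (`controlnet_scale_schedule`, `unet`, `controlnet`, `scheduler`, and the dual conditions `bg_tracks` and `fg_rgb`) are illustrative assumptions, not the authors' API.

```python
# Minimal sketch of an inference-time ControlNet scale schedule.
# The exact schedule is not given in the abstract; a cosine decay from a
# strong early scale to a weak late scale is one plausible instantiation.

import math

def controlnet_scale_schedule(step: int, num_steps: int,
                              scale_start: float = 1.0,
                              scale_end: float = 0.2) -> float:
    """Hypothetical schedule: strong conditioning early (when layout and
    motion are decided), relaxed conditioning late (texture refinement)."""
    t = step / max(num_steps - 1, 1)
    # Cosine interpolation from scale_start (t=0) down to scale_end (t=1).
    return scale_end + 0.5 * (scale_start - scale_end) * (1 + math.cos(math.pi * t))

# Assumed denoising loop showing where the scale would apply; `unet`,
# `controlnet`, `scheduler`, `bg_tracks`, and `fg_rgb` are placeholders:
#
# for i, t in enumerate(scheduler.timesteps):
#     scale = controlnet_scale_schedule(i, len(scheduler.timesteps))
#     ctrl_residuals = controlnet(latents, t, cond=(bg_tracks, fg_rgb))
#     noise_pred = unet(latents, t,
#                       down_residuals=[scale * r for r in ctrl_residuals])
#     latents = scheduler.step(noise_pred, t, latents).prev_sample
```

A linear or stepwise decay would slot in the same way; the point is only that the scale multiplying the ControlNet residuals varies with the denoising step rather than staying fixed.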