PhyCo: Learning Controllable Physical Priors for Generative Motion

2026-04-30

Computer Vision and Pattern Recognition, Artificial Intelligence, Machine Learning
AI summary

The authors created PhyCo, a system that helps video generation models produce videos in which objects behave more realistically, for example without drifting and with physically plausible bounces. They built a large set of simulation videos showing different physical effects, then fine-tuned a model to follow these effects using pixel-aligned property maps and a vision-language model that checks the physics of generated videos. This lets the generator produce videos that better follow physical rules without running a simulator or doing complex calculations at generation time. Tests showed that PhyCo produces videos that look more physically realistic and gives users clearer control over physical attributes than prior methods.

Keywords
video diffusion models, physical consistency, ControlNet, physics simulation, vision-language model, fine-tuning, physical property maps, reinforcement learning, Physics-IQ benchmark, generative video models
Authors
Sriram Narayanan, Ziyu Jiang, Srinivasa Narasimhan, Manmohan Chandraker
Abstract
Modern video diffusion models excel at appearance synthesis but still struggle with physical consistency: objects drift, collisions lack realistic rebound, and material responses seldom match their underlying properties. We present PhyCo, a framework that introduces continuous, interpretable, and physically grounded control into video generation. Our approach integrates three key components: (i) a large-scale dataset of over 100K photorealistic simulation videos where friction, restitution, deformation, and force are systematically varied across diverse scenarios; (ii) physics-supervised fine-tuning of a pretrained diffusion model using a ControlNet conditioned on pixel-aligned physical property maps; and (iii) VLM-guided reward optimization, where a fine-tuned vision-language model evaluates generated videos with targeted physics queries and provides differentiable feedback. This combination enables a generative model to produce physically consistent and controllable outputs through variations in physical attributes, without any simulator or geometry reconstruction at inference. On the Physics-IQ benchmark, PhyCo significantly improves physical realism over strong baselines, and human studies confirm clearer and more faithful control over physical attributes. Our results demonstrate a scalable path toward physically consistent, controllable generative video models that generalize beyond synthetic training environments.
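The abstract's second component, a ControlNet conditioned on pixel-aligned physical property maps, can be illustrated with a small PyTorch sketch. This is not the authors' code: the `PropertyMapEncoder` module, the 4-channel map layout (friction, restitution, deformation, force), and all tensor shapes are illustrative assumptions. The zero-initialized projection is the standard ControlNet device that leaves the frozen backbone unchanged at the start of fine-tuning.

```python
# Minimal sketch (not the authors' code): ControlNet-style injection of
# pixel-aligned physical property maps into a frozen video diffusion backbone.
# `PropertyMapEncoder` and the 4-channel layout (friction, restitution,
# deformation, force) are illustrative assumptions.
import torch
import torch.nn as nn

class PropertyMapEncoder(nn.Module):
    """Encodes per-pixel physics maps into features matched to a UNet block."""
    def __init__(self, in_channels: int = 4, feat_channels: int = 320):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=3, padding=1),
            nn.SiLU(),
            nn.Conv2d(64, feat_channels, kernel_size=3, padding=1),
        )
        # Zero-initialized projection: at the start of fine-tuning the control
        # branch contributes nothing, so the pretrained backbone is unchanged.
        self.zero_conv = nn.Conv2d(feat_channels, feat_channels, kernel_size=1)
        nn.init.zeros_(self.zero_conv.weight)
        nn.init.zeros_(self.zero_conv.bias)

    def forward(self, prop_maps: torch.Tensor) -> torch.Tensor:
        # prop_maps: (B, 4, H, W), one channel per physical attribute.
        return self.zero_conv(self.encoder(prop_maps))

# Usage: add the control residual to a backbone block's hidden states.
encoder = PropertyMapEncoder()
prop_maps = torch.rand(2, 4, 64, 64)     # friction, restitution, deformation, force
hidden = torch.randn(2, 320, 64, 64)     # features from the frozen backbone
hidden = hidden + encoder(prop_maps)     # ControlNet-style residual injection
```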
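The third component, VLM-guided reward optimization, amounts to backpropagating a scalar physics score from a differentiable critic into the generator. The sketch below is a toy stand-in, assuming the fine-tuned vision-language model can be treated as a differentiable module mapping video features to a score in [0, 1]; `ToyPhysicsCritic` and every name here are hypothetical.

```python
# Minimal sketch (assumptions throughout): VLM-guided reward feedback.
# `ToyPhysicsCritic` stands in for the fine-tuned vision-language model
# described in the abstract; any differentiable scorer would play this role.
import torch
import torch.nn as nn

class ToyPhysicsCritic(nn.Module):
    """Placeholder for a differentiable physics scorer."""
    def __init__(self, feat_dim: int = 16):
        super().__init__()
        self.head = nn.Linear(feat_dim, 1)

    def forward(self, video_feats: torch.Tensor) -> torch.Tensor:
        # video_feats: (B, feat_dim) pooled video features -> scores in [0, 1].
        return torch.sigmoid(self.head(video_feats)).squeeze(-1)

critic = ToyPhysicsCritic()
# Stand-in for decoded generator output; requires_grad lets reward
# gradients flow back toward the generator's parameters.
generator_output = torch.randn(4, 16, requires_grad=True)

# Reward = mean physics score; maximizing it is minimizing its negative.
reward = critic(generator_output).mean()
loss = -reward
loss.backward()
print(generator_output.grad is not None)  # True: the generator gets feedback
```

In the full system the gradient would flow through the diffusion model's sampling path rather than a raw feature tensor, but the differentiable-feedback principle is the same.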