HarmoWAM: Harmonizing Generalizable and Precise Manipulation via Adaptive World Action Models
2026-05-11 • Robotics
AI summary
The authors studied two main ways robots handle tasks using models of the world: one focuses on imagining what will happen to plan actions generally, but isn't very precise; the other focuses on tightly linking actions and visuals, which is precise but limited to known situations. To get the best of both, they created HarmoWAM, a system that combines these approaches by using a world model to guide two different action strategies and a gating mechanism that switches between them. This allows the robot to both adapt to new settings and perform detailed tasks accurately. In tests on environments unseen during training, their method substantially outperformed previous models without any additional training.
World Action Models • Inverse Dynamics • Video Prediction • Spatio-temporal Priors • Latent Dynamics • Predictive Control • Reactive Control • Zero-shot Generalization • Robotic Manipulation • Gating Mechanism
Authors
Qiuxuan Feng, Jiale Yu, Jiaming Liu, Yueru Jia, Zhuangzhe Wu, Hao Chen, Zezhong Qian, Shuo Gu, Peng Jia, Siwei Ma, Shanghang Zhang
Abstract
World Action Models (WAMs) have emerged as a promising paradigm for robot control by modeling physical dynamics. Current WAMs generally follow two paradigms: the "Imagine-then-Execute" approach, which uses video prediction to infer actions via inverse dynamics, and the "Joint Modeling" approach, which jointly models actions and video representations. Through systematic experiments, we observe a fundamental trade-off between these paradigms: the former explicitly leverages world models for generalizable transit but lacks interaction precision, whereas the latter enables fine-grained, temporally coherent action generation but is constrained to the exploration space of the training distribution. Motivated by these findings, we propose HarmoWAM, an end-to-end WAM that fully leverages a world model to unify predictive and reactive control, enabling both generalizable transit and precise manipulation. Specifically, the world model provides spatio-temporal physical priors that condition two complementary action experts: a predictive expert that leverages latent dynamics for iterative action generation, and a reactive expert that directly infers actions from the predicted visual evolution. To coordinate the two adaptively, we propose a Process-Adaptive Gating Mechanism that automatically determines when and where to switch between them. This allows the world model to drive the reactive expert to expand the exploration space and the predictive expert to perform precise interactions across different stages of a task. For evaluation, we construct three training-unseen test environments across six real-world robotic tasks, covering variations in background, position, and object semantics. Notably, HarmoWAM achieves strong zero-shot generalization across these scenarios, outperforming prior state-of-the-art VLA models and WAMs by margins of 33% and 29%, respectively.
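The dual-expert design with an adaptive gate can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the expert networks are stand-in linear maps, the scalar `interaction_score` (assumed to come from the world model's spatio-temporal priors) and the soft sigmoid gate are assumptions, and the action dimension is an arbitrary choice.

```python
import numpy as np

ACTION_DIM = 7  # e.g. 6-DoF end-effector delta + gripper (assumed)

def predictive_expert(latent):
    """Stand-in for iterative action generation from latent dynamics."""
    W = np.full((ACTION_DIM, latent.size), 0.1)  # placeholder weights
    return np.tanh(W @ latent)

def reactive_expert(visual_pred):
    """Stand-in for directly inferring actions from predicted visual evolution."""
    W = np.full((ACTION_DIM, visual_pred.size), -0.05)  # placeholder weights
    return np.tanh(W @ visual_pred)

def gate(interaction_score):
    """Process-adaptive gate (assumed soft/sigmoid): maps a scalar
    'closeness-to-interaction' score to a mixing weight in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-interaction_score))

def harmonized_action(latent, visual_pred, interaction_score):
    g = gate(interaction_score)
    # High g -> interaction stage: favor the predictive expert (precision).
    # Low g  -> transit stage: favor the reactive expert (generalization).
    return g * predictive_expert(latent) + (1.0 - g) * reactive_expert(visual_pred)

latent = np.ones(16)       # world-model latent state (toy)
visual_pred = np.ones(32)  # predicted visual evolution features (toy)
action = harmonized_action(latent, visual_pred, interaction_score=2.0)
```

A soft gate like this blends the two experts continuously; a hard (thresholded) switch would be the other natural choice, at the cost of discontinuous actions at stage boundaries.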