World2VLM: Distilling World Model Imagination into VLMs for Dynamic Spatial Reasoning

2026-04-29

Computer Vision and Pattern Recognition
AI summary

The authors address the challenge that vision-language models (VLMs) struggle to imagine how scenes change as the viewpoint moves. Instead of relying on costly inference-time computation or simplistic synthetic data, they propose World2VLM, a method that teaches VLMs using a world model that generates consistent future views of a scene conditioned on camera movement. Trained this way, the models learn spatial reasoning more effectively and perform better on various benchmarks without heavy computation at test time. This shows that world models can help VLMs learn during training, not just at inference.

Vision-language models, Spatial reasoning, Egocentric motion, World models, Spatial imagination, Forward reasoning, Inverse reasoning, View synthesis, Training distillation, Inference efficiency
Authors
Wanyue Zhang, Wenxiang Wu, Wang Xu, Jiaxin Luo, Helu Zhi, Yibin Huang, Shuo Ren, Zitao Liu, Jiajun Zhang
Abstract
Vision-language models (VLMs) have shown strong performance on static visual understanding, yet they still struggle with dynamic spatial reasoning that requires imagining how scenes evolve under egocentric motion. Recent efforts address this limitation either by scaling spatial supervision with synthetic data or by coupling VLMs with world models at inference time. However, the former often lacks explicit modeling of motion-conditioned state transitions, while the latter incurs substantial computational overhead. In this work, we propose World2VLM, a training framework that distills spatial imagination from a generative world model into a vision-language model. Given an initial observation and a parameterized camera trajectory, we use a view-consistent world model to synthesize geometrically aligned future views and derive structured supervision for both forward (action-to-outcome) and inverse (outcome-to-action) spatial reasoning. We post-train the VLM with a two-stage recipe on a compact dataset generated by this pipeline and evaluate it on multiple spatial reasoning benchmarks. World2VLM delivers consistent improvements over the base model across diverse benchmarks, including SAT-Real, SAT-Synthesized, VSI-Bench, and MindCube. It also outperforms test-time world-model-coupled methods while eliminating the need for expensive inference-time generation. Our results suggest that world models can serve not only as inference-time tools, but also as effective training-time teachers, enabling VLMs to internalize spatial imagination in a scalable and efficient manner.
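The abstract describes deriving both forward (action-to-outcome) and inverse (outcome-to-action) supervision by rolling a view-consistent world model along a parameterized camera trajectory. Below is a minimal sketch of what such a data-generation loop could look like; the `CameraAction` parameterization, the `world_model.render` interface, and the question templates are all illustrative assumptions, not the paper's actual pipeline.

```python
from dataclasses import dataclass

@dataclass
class CameraAction:
    """One egocentric motion step (hypothetical parameterization)."""
    yaw_deg: float    # rotation around the vertical axis
    forward_m: float  # translation along the viewing direction

def make_training_pairs(world_model, initial_obs, trajectory):
    """Roll an assumed world model along a camera trajectory and emit
    forward (action -> outcome) and inverse (outcome -> action)
    supervision pairs for VLM post-training.

    `world_model.render(obs, action)` is an assumed interface that
    returns a geometrically aligned future view of the scene.
    """
    pairs = []
    obs = initial_obs
    for action in trajectory:
        next_obs = world_model.render(obs, action)  # synthesized future view

        # Forward reasoning: given the current view and a motion,
        # the model must predict the resulting view. The actual
        # supervision format (image target, text description, or
        # multiple choice) is an assumption here.
        pairs.append({
            "images": [obs],
            "question": f"After turning {action.yaw_deg:+.0f} deg and moving "
                        f"{action.forward_m:.1f} m forward, what will you see?",
            "answer_image": next_obs,
        })

        # Inverse reasoning: given two views, the model must infer
        # the camera motion connecting them.
        pairs.append({
            "images": [obs, next_obs],
            "question": "What camera motion transforms the first view into the second?",
            "answer": f"turn {action.yaw_deg:+.0f} deg, "
                      f"move {action.forward_m:.1f} m forward",
        })

        obs = next_obs
    return pairs
```

The real pipeline presumably also filters rollouts for view consistency and formats answers to suit the two-stage post-training recipe; the sketch only illustrates how a single trajectory can yield paired forward and inverse examples.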