Envisioning the Future, One Step at a Time
2026-04-10 • Computer Vision and Pattern Recognition
Computer Vision and Pattern Recognition • Artificial Intelligence • Machine Learning
AI summary
The authors propose a new way to predict how complex scenes will change by focusing on the paths of key points rather than trying to predict every pixel of a video. Their model uses a step-by-step process that models growing uncertainty as it predicts many possible futures quickly and realistically from just one image. They also introduce a new benchmark, OWM, to test how well models predict diverse future motion in real-world videos. Their approach matches the accuracy of denser pixel-level methods while being much faster, making it practical to explore many possible future scenarios.
scene dynamics, trajectory prediction, autoregressive diffusion model, uncertainty modeling, multi-modal motion, future prediction, benchmark, open-set, sampling speed, physical plausibility
Authors
Stefan Andreas Baumann, Jannik Wiese, Tommaso Martorella, Mahdi M. Kalayeh, Björn Ommer
Abstract
Accurately anticipating how complex, diverse scenes will evolve requires models that represent uncertainty, simulate along extended interaction chains, and efficiently explore many plausible futures. Yet most existing approaches rely on dense video or latent-space prediction, expending substantial capacity on dense appearance rather than on the underlying sparse trajectories of points in the scene. This makes large-scale exploration of future hypotheses costly and limits performance when long-horizon, multi-modal motion is essential. We address this by formulating the prediction of open-set future scene dynamics as step-wise inference over sparse point trajectories. Our autoregressive diffusion model advances these trajectories through short, locally predictable transitions, explicitly modeling the growth of uncertainty over time. This dynamics-centric representation enables fast rollout of thousands of diverse futures from a single image, optionally guided by initial constraints on motion, while maintaining physical plausibility and long-range coherence. We further introduce OWM, a benchmark for open-set motion prediction based on diverse in-the-wild videos, to evaluate accuracy and variability of predicted trajectory distributions under real-world uncertainty. Our method matches or surpasses dense simulators in predictive accuracy while achieving orders-of-magnitude higher sampling speed, making open-set future prediction both scalable and practical. Project page: http://compvis.github.io/myriad.
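The core idea in the abstract — autoregressively advancing sparse point trajectories through short transitions while letting uncertainty grow with the horizon — can be illustrated with a toy sketch. The code below is not the authors' model: the constant drift stands in for the learned denoising network, and all names (`rollout_futures`, `base_sigma`, etc.) are hypothetical. It only shows the rollout structure: many cheap, independent futures sampled step by step from one initial set of points, with per-step noise that widens over time.

```python
import math
import random

def rollout_futures(points, num_steps=8, num_futures=4, base_sigma=0.01, seed=0):
    """Autoregressively roll out many candidate futures for sparse 2-D points.

    A fixed rightward drift is a toy stand-in for the learned per-step
    transition model; growing uncertainty is mimicked by a noise scale
    that widens with the prediction horizon.
    """
    rng = random.Random(seed)
    futures = []
    for _ in range(num_futures):
        traj = [list(points)]  # frames; each frame is a list of (x, y) tuples
        for t in range(num_steps):
            sigma = base_sigma * math.sqrt(t + 1)  # uncertainty grows over time
            nxt = [(x + 0.05 + rng.gauss(0.0, sigma),  # toy deterministic drift
                    y + rng.gauss(0.0, sigma))         # plus stochastic spread
                   for (x, y) in traj[-1]]
            traj.append(nxt)
        futures.append(traj)
    return futures

# Sample three distinct futures for two tracked points from a single "image".
futures = rollout_futures([(0.5, 0.5), (0.2, 0.8)], num_steps=6, num_futures=3)
```

Because each future touches only a handful of point coordinates per step rather than a dense frame, sampling thousands of such rollouts is cheap — which is the efficiency argument the abstract makes against dense video prediction.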