Towards Spatio-Temporal World Scene Graph Generation from Monocular Videos
2026-03-13 • Computer Vision and Pattern Recognition
AI summary
The authors introduce a new dataset called ActionGenome4D that upgrades existing videos into 3D scenes where objects are tracked even when hidden or out of view. They define a task called World Scene Graph Generation (WSGG) to build a complete map of all objects and their interactions over time, including those not currently visible. To solve this, they propose three methods that help reason about hidden objects using different memory and attention strategies. They also test existing Vision-Language Models on this task to provide baseline results. Overall, their work helps improve how computers understand dynamic scenes by considering persistent and 3D object interactions.
Spatio-temporal scene graph · 3D reconstruction · Object permanence · World Scene Graph Generation · Masked completion · Temporal attention · Vision-Language Models · Graph retrieval-augmented generation · Occlusion reasoning · Scene understanding
Authors
Rohith Peddi, Saurabh, Shravan Shanmugam, Likhitha Pallapothula, Yu Xiang, Parag Singla, Vibhav Gogate
Abstract
Spatio-temporal scene graphs provide a principled representation for modeling evolving object interactions, yet existing methods remain fundamentally frame-centric: they reason only about currently visible objects, discard entities upon occlusion, and operate in 2D. To address this, we first introduce ActionGenome4D, a dataset that upgrades Action Genome videos into 4D scenes via feed-forward 3D reconstruction, world-frame oriented bounding boxes for every object involved in actions, and dense relationship annotations, including for objects that are temporarily unobserved due to occlusion or camera motion. Building on this data, we formalize World Scene Graph Generation (WSGG), the task of constructing a world scene graph at each timestamp that encompasses all interacting objects in the scene, both observed and unobserved. We then propose three complementary methods, each exploring a different inductive bias for reasoning about unobserved objects: PWG (Persistent World Graph), which implements object permanence via a zero-order feature buffer; MWAE (Masked World Auto-Encoder), which reframes unobserved-object reasoning as masked completion with cross-view associative retrieval; and 4DST (4D Scene Transformer), which replaces the static buffer with differentiable per-object temporal attention enriched by 3D motion and camera-pose features. We further design a suite of Graph RAG-based approaches to evaluate strong open-source Vision-Language Models on the WSGG task, establishing baselines for unlocalized relationship prediction. WSGG thus advances video scene understanding toward world-centric, temporally persistent, and interpretable scene reasoning.
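To make the zero-order feature buffer concrete, the following is a minimal, hypothetical sketch of the object-permanence idea the abstract attributes to PWG: the last observed feature of each object is held constant (a zero-order hold) while the object is occluded or out of view, so the world graph can still include it. Class and method names, and the feature representation, are illustrative assumptions, not the authors' implementation.

```python
class PersistentWorldBuffer:
    """Zero-order feature buffer: each object's most recent observed feature
    is carried forward unchanged while the object is unobserved."""

    def __init__(self):
        self.features = {}   # object_id -> last observed feature vector
        self.last_seen = {}  # object_id -> timestamp of last observation

    def update(self, t, observations):
        """observations: dict mapping object_id -> feature vector at time t."""
        for obj_id, feat in observations.items():
            self.features[obj_id] = feat
            self.last_seen[obj_id] = t

    def world_state(self, t):
        """Return every tracked object at time t. Observed objects carry fresh
        features; unobserved ones reuse the buffered (zero-order) feature."""
        return {
            obj_id: {"feature": feat, "observed": self.last_seen[obj_id] == t}
            for obj_id, feat in self.features.items()
        }


buf = PersistentWorldBuffer()
buf.update(0, {"cup": [0.1, 0.9], "table": [0.4, 0.2]})
buf.update(1, {"table": [0.5, 0.2]})   # cup is occluded at t=1
state = buf.world_state(1)
print(state["cup"]["observed"])        # False: cup persists via the buffer
print(state["cup"]["feature"])         # [0.1, 0.9]: last observed feature
```

MWAE and 4DST, by contrast, replace this static hold with learned completion: masked prediction of the missing features in the former, and differentiable per-object temporal attention over past states in the latter.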