Grounding Video Reasoning in Physical Signals

2026-04-23

Computer Vision and Pattern Recognition
AI summary

The authors created a new test to better understand how well computer models can recognize and locate physical events in videos, like pouring or sliding. Their test checks if models can answer questions about what happened, when it happened, and where it happened across different video types and physics topics. They found that models do best at understanding physics-related questions but have more trouble accurately pinpointing the event’s location. The study suggests future tests should include detailed checks that consider different question styles and how changes to the videos affect model performance.

Keywords
physical video understanding, event localization, video question answering, grounded benchmark, temporal grounding, spatial grounding, physics domains, video perturbations, SSV2 dataset, prompt robustness
Authors
Alibay Osmanli, Zixu Cheng, Shaogang Gong
Abstract
Physical video understanding requires more than naming an event correctly. A model can answer a question about pouring, sliding, or collision from textual regularities while still failing to localize the event in time or space. We introduce a grounded benchmark for physical video understanding that extends the what-when-where evaluation structure of V-STaR to four video sources, six physics domains, three prompt families (physics, vstar_like, and neutral_rstr), and four input conditions (original, shuffled, ablated, and frame-masked). The benchmark contains 1,560 base video clips from SSV2, YouCook2, HoloAssist, and Roundabout-TAU. Each clip is first converted into a shared grounded event record, and the queries for the three prompt families are derived from that record. Temporal and spatial targets are shared across prompt families, while the non-physics families use deterministic, family-appropriate semantic a_what targets derived from the same record. Across models and prompt families, physics remains the strongest regime overall, vstar_like is the clearest non-physics semantic comparison, and neutral_rstr behaves as a harder templated control. Prompt-family robustness is selective rather than universal, perturbation gains cluster in cases that are already weak on the original inputs, and spatial grounding is the weakest capability across settings. These results suggest that video question-answering reasoning benchmarks should report physically grounded, prompt-aware, and perturbation-aware diagnostics alongside aggregate accuracy.
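To make the shared-record design concrete, below is a minimal Python sketch of how a grounded event record could carry one shared temporal target and one shared spatial target alongside per-family semantic targets. The schema, field names (t_span, bbox, a_when, a_where), and example values are illustrative assumptions, not the benchmark's actual data format; only a_what and the prompt-family names appear in the abstract.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

# Hypothetical family constants; names taken from the abstract.
PROMPT_FAMILIES = ["physics", "vstar_like", "neutral_rstr"]

@dataclass
class GroundedEventRecord:
    """One clip's shared grounded event record (illustrative schema, not the paper's)."""
    video_id: str
    source: str                               # e.g. "SSV2", "YouCook2", "HoloAssist", "Roundabout-TAU"
    domain: str                               # one of the six physics domains
    a_what: Dict[str, str]                    # per-family semantic answer derived from the same record
    t_span: Tuple[float, float]               # shared temporal target (start, end) in seconds
    bbox: Tuple[float, float, float, float]   # shared spatial target (x, y, w, h), normalized

def derive_queries(rec: GroundedEventRecord) -> List[dict]:
    """Derive one what/when/where query per prompt family.

    The temporal and spatial targets are identical across families;
    only the semantic 'what' target changes with the prompt family.
    """
    queries = []
    for family in PROMPT_FAMILIES:
        queries.append({
            "video_id": rec.video_id,
            "family": family,
            "a_what": rec.a_what[family],   # family-appropriate semantic target
            "a_when": rec.t_span,           # shared across families
            "a_where": rec.bbox,            # shared across families
        })
    return queries

# Example usage with made-up values.
rec = GroundedEventRecord(
    video_id="ssv2_000123",
    source="SSV2",
    domain="pouring",
    a_what={
        "physics": "liquid is poured into the cup",
        "vstar_like": "pouring water",
        "neutral_rstr": "object A transfers contents to object B",
    },
    t_span=(2.4, 5.1),
    bbox=(0.31, 0.22, 0.18, 0.25),
)
print(derive_queries(rec))
```

Under this reading, comparing prompt families isolates the effect of question style, since every family is scored against the same when and where targets.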