Feeling the Space: Egomotion-Aware Video Representation for Efficient and Accurate 3D Scene Understanding
2026-03-18 • Computer Vision and Pattern Recognition
AI summary
The authors improve multimodal large language models that analyze 3D scenes by adding egomotion information from motion sensors (IMUs) recorded alongside the video. Their system, Motion-MLLM, selects important video frames using both motion and visual data, then fuses the two modalities so the model better understands how things move in space. This grounding lets the model infer real-world sizes and spatial relationships more accurately without heavy 3D inputs such as point clouds. Experiments show Motion-MLLM matches or beats current top methods while being more efficient.
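The summary mentions selecting important frames using motion data. As a minimal sketch of the motion stage of such a filter (the paper's actual module and thresholds are not specified here, so the function name, sampling rate, and rotation threshold below are illustrative assumptions): a frame is kept as a keyframe whenever the rotation accumulated from gyroscope readings since the last keyframe exceeds a threshold.

```python
import math

def select_keyframes(gyro_rates, dt=0.02, rot_thresh_deg=15.0):
    """Motion stage of a cascaded keyframe filter (illustrative sketch).

    gyro_rates: per-frame angular rate in rad/s from the IMU gyroscope.
    dt: sampling interval in seconds (assumed 50 Hz here).
    Emits a frame index whenever accumulated rotation since the last
    keyframe exceeds rot_thresh_deg. A second, visual stage (as in the
    paper's cascaded design) would further prune these candidates.
    """
    keyframes = [0]          # always keep the first frame
    accum = 0.0              # rotation accumulated since last keyframe, degrees
    for i, rate in enumerate(gyro_rates[1:], start=1):
        accum += math.degrees(abs(rate)) * dt
        if accum >= rot_thresh_deg:
            keyframes.append(i)
            accum = 0.0      # reset the budget at each keyframe
    return keyframes

# Example: camera rotating at a constant 0.5 rad/s for 100 samples at 50 Hz.
# Each sample contributes ~0.57 degrees, so a keyframe fires every 27 frames.
kf = select_keyframes([0.5] * 100)
```

The point of the motion stage is that IMU data is cheap to process, so expensive visual feature extraction only runs on the surviving candidates.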
Keywords
Multimodal Large Language Models, 3D spatial reasoning, Inertial Measurement Units (IMU), egomotion, keyframe filtering, cross-modal fusion, Bird's-Eye View (BEV), point clouds, scene understanding, scale grounding
Authors
Shuyao Shi, Kang G. Shin
Abstract
Recent Multimodal Large Language Models (MLLMs) have shown strong potential for spatial reasoning within 3D scenes. However, they typically rely on computationally expensive 3D representations like point clouds or reconstructed Bird's-Eye View (BEV) maps, or lack the physical grounding needed to resolve ambiguities in scale and size. This paper enhances MLLMs with an egomotion modality captured by Inertial Measurement Units (IMUs) concurrently with the video. In particular, we propose a novel framework, called Motion-MLLM, introducing two key components: (1) a cascaded motion-visual keyframe filtering module that leverages both IMU data and visual features to efficiently select a sparse yet representative set of keyframes, and (2) an asymmetric cross-modal fusion module in which motion tokens serve as intermediaries that channel egomotion cues and cross-frame visual context into the visual representation. By grounding visual content in physical egomotion trajectories, Motion-MLLM can reason about absolute scale and spatial relationships across the scene. Our extensive evaluation shows that Motion-MLLM yields significant improvements on various 3D scene understanding and spatial reasoning tasks. Compared to state-of-the-art (SOTA) methods based on video frames and explicit 3D data, Motion-MLLM achieves comparable or higher accuracy with significantly less overhead (i.e., 1.40$\times$ and 1.63$\times$ higher cost-effectiveness, respectively).
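The asymmetric fusion described in the abstract can be sketched as two cross-attention passes: motion tokens first attend to visual tokens to gather cross-frame context, and the enriched motion tokens are then attended to by the visual tokens, injecting egomotion cues back into the visual stream. This is an illustrative sketch under those assumptions only (single head, no learned projections or normalization), not the paper's actual module.

```python
import numpy as np

def attend(q, k, v):
    """Scaled dot-product attention (single head, no projections)."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)       # softmax over keys
    return w @ v

def asymmetric_fuse(motion_tokens, visual_tokens):
    """Illustrative asymmetric cross-modal fusion.

    Pass 1: motion tokens query the visual tokens, gathering
            cross-frame visual context into the motion stream.
    Pass 2: visual tokens query the enriched motion tokens, so
            egomotion cues flow into the visual representation.
    Visual tokens never attend to each other directly here; the few
    motion tokens act as the intermediary, which keeps the fusion cheap.
    """
    motion_ctx = motion_tokens + attend(motion_tokens, visual_tokens, visual_tokens)
    fused_visual = visual_tokens + attend(visual_tokens, motion_ctx, motion_ctx)
    return fused_visual

rng = np.random.default_rng(0)
m = rng.standard_normal((4, 16))     # 4 motion tokens, dim 16 (assumed sizes)
v = rng.standard_normal((32, 16))    # 32 visual tokens, e.g. 2 frames x 16 patches
out = asymmetric_fuse(m, v)
```

Because attention cost scales with the product of query and key counts, routing through a handful of motion tokens avoids the quadratic cost of full visual-to-visual attention across frames, which is consistent with the paper's efficiency claim.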