SlotVTG: Object-Centric Adapter for Generalizable Video Temporal Grounding
2026-03-26 • Computer Vision and Pattern Recognition
Keywords: Multimodal Large Language Models, Video Temporal Grounding, Fine-tuning, Out-of-Domain Generalization, Object-centric Learning, Slot Attention, Self-supervised Vision Models, Visual Tokens, Cross-domain Evaluation
Authors
Jiwook Han, Geo Ahn, Youngrae Kim, Jinwoo Choi
Abstract
Multimodal Large Language Models (MLLMs) have shown strong performance on Video Temporal Grounding (VTG). However, their coarse recognition capabilities are insufficient for fine-grained temporal understanding, making task-specific fine-tuning indispensable. This fine-tuning causes models to memorize dataset-specific shortcuts rather than ground their predictions in the actual visual content, leading to poor Out-of-Domain (OOD) generalization. Object-centric learning offers a promising remedy by decomposing scenes into entity-level representations, but existing approaches require re-running the entire multi-stage training pipeline from scratch. We propose SlotVTG, a framework that steers MLLMs toward object-centric, input-grounded visual reasoning at minimal cost. SlotVTG introduces a lightweight slot adapter that decomposes visual tokens into abstract slots via slot attention and reconstructs the original sequence, where objectness priors from a self-supervised vision model encourage semantically coherent slot formation. Cross-domain evaluation on standard VTG benchmarks demonstrates that our approach significantly improves OOD robustness while maintaining competitive In-Domain (ID) performance with minimal overhead.
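The abstract does not give implementation details of the slot adapter, but the core mechanism it names, slot attention over visual tokens, can be sketched in minimal form. The following NumPy snippet is an illustrative simplification (function name and hyperparameters are hypothetical, and the learned projections and GRU update of full slot attention are omitted): the key property is that the softmax is normalized over the slot axis, so slots compete to explain each visual token.

```python
import numpy as np

def slot_attention(inputs, num_slots=4, iters=3, seed=0):
    """Simplified slot-attention iteration (after Locatello et al., 2020).

    inputs: (n, d) array of visual tokens.
    Returns (num_slots, d) slot vectors and the (n, num_slots) attention map.
    """
    rng = np.random.default_rng(seed)
    n, d = inputs.shape
    # Initialize slots from a Gaussian; a trained model would learn its parameters.
    slots = rng.normal(size=(num_slots, d))
    for _ in range(iters):
        # Dot-product attention logits between tokens and slots.
        logits = inputs @ slots.T / np.sqrt(d)              # (n, num_slots)
        # Softmax over the SLOT axis: slots compete for each token.
        attn = np.exp(logits - logits.max(axis=1, keepdims=True))
        attn /= attn.sum(axis=1, keepdims=True)
        # Update each slot as the weighted mean of the tokens it attracted.
        w = attn / (attn.sum(axis=0, keepdims=True) + 1e-8)  # (n, num_slots)
        slots = w.T @ inputs                                 # (num_slots, d)
    return slots, attn

tokens = np.random.default_rng(1).normal(size=(16, 8))  # e.g. 16 visual tokens
slots, attn = slot_attention(tokens)
print(slots.shape, attn.shape)  # (4, 8) (16, 4)
```

In a full adapter, the attention map would also be used to reconstruct the original token sequence from the slots (e.g. via `attn @ slots`), matching the decompose-then-reconstruct design the abstract describes.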