MME-CoF-Pro: Evaluating Reasoning Coherence in Video Generative Models with Text and Visual Hints

2026-03-20 · Computer Vision and Pattern Recognition

AI summary

The authors introduce MME-CoF-Pro, a benchmark that tests whether video generative models keep generated events logically consistent across frames, a property they call reasoning coherence. The benchmark contains 303 video samples and uses a Reasoning Score to measure whether models follow the necessary intermediate reasoning steps during event generation. Three guidance settings are compared: no hints, text hints, and visual hints. The findings show that current models exhibit weak reasoning coherence, that text hints can raise apparent correctness while introducing inconsistency and hallucinated reasoning, and that visual hints help on structured perceptual tasks but struggle with fine-grained ones.

video generative models · reasoning coherence · benchmark · Reasoning Score · text hints · visual hints · causal consistency · video reasoning · evaluation metrics · model hallucination
Authors
Yu Qi, Xinyi Xu, Ziyu Guo, Siyuan Ma, Renrui Zhang, Xinyan Chen, Ruichuan An, Ruofan Xing, Jiayi Zhang, Haojie Huang, Pheng-Ann Heng, Jonathan Tremblay, Lawson L. S. Wong
Abstract
Video generative models show emerging reasoning behaviors. For reliable deployment, it is essential that generated events remain causally consistent across frames, a property we define as reasoning coherence. To fill the gap left by the absence of reasoning coherence evaluation in the literature, we propose MME-CoF-Pro, a comprehensive video reasoning benchmark for assessing reasoning coherence in video models. Specifically, MME-CoF-Pro contains 303 samples across 16 categories, ranging from visual logical reasoning to scientific reasoning. It introduces the Reasoning Score, an evaluation metric that assesses the necessary intermediate reasoning steps at the process level, and includes three evaluation settings: (a) no hint, (b) text hint, and (c) visual hint, enabling a controlled investigation into the underlying mechanisms of reasoning hint guidance. Evaluation results on 7 open- and closed-source video models reveal several insights: (1) video generative models exhibit weak reasoning coherence, decoupled from generation quality; (2) text hints boost apparent correctness but often cause inconsistency and hallucinated reasoning; (3) visual hints benefit structured perceptual tasks but struggle with fine-grained perception. Website: https://video-reasoning-coherence.github.io/