Do Vision-Language Models Truly Perform Vision Reasoning? A Rigorous Study of the Modality Gap
2026-04-17 • Computer Vision and Pattern Recognition • Computation and Language
AI summary
The authors created a new benchmark called CrossMath to measure how well vision-language models (VLMs) solve problems presented as text, images, or both together. Each problem carries exactly the same information in every format, so the models' use of each input type can be compared fairly. They found that models performed best with text alone and often got worse when images were added, indicating that these models rely mostly on textual reasoning and do not use visual data effectively. After fine-tuning the models on a CrossMath training set, the authors showed that reasoning improved across both images and text.
vision-language models • multimodal reasoning • benchmark • fine-tuning • textual backbone • visual evidence • CrossMath • model evaluation • image+text inputs • reasoning tasks
Authors
Yige Xu, Yongjie Wang, Zizhuo Wu, Kaisong Song, Jun Lin, Zhiqi Shen
Abstract
Reasoning in vision-language models (VLMs) has recently attracted significant attention due to its broad applicability across diverse downstream tasks. However, it remains unclear whether the superior performance of VLMs stems from genuine vision-grounded reasoning or relies predominantly on the reasoning capabilities of their textual backbones. To measure this systematically, we introduce CrossMath, a novel multimodal reasoning benchmark designed for controlled cross-modal comparisons. Specifically, we construct each problem in text-only, image-only, and image+text formats, guaranteeing identical task-relevant information as verified by human annotators. This rigorous alignment isolates modality-specific reasoning differences while eliminating confounding factors such as information mismatch. Extensive evaluation of state-of-the-art VLMs reveals a consistent phenomenon: a substantial performance gap between textual and visual reasoning. Notably, VLMs excel with text-only inputs, whereas incorporating visual data (image+text) frequently degrades performance relative to the text-only baseline. These findings indicate that current VLMs reason primarily in the textual space, with limited genuine reliance on visual evidence. To mitigate this limitation, we curate a CrossMath training set for VLM fine-tuning. Empirical evaluations demonstrate that fine-tuning on this training set significantly boosts reasoning performance across all individual and joint modalities, while yielding robust gains on two general visual reasoning tasks. Source code is available at https://github.com/xuyige/CrossMath.