Tinted Frames: Question Framing Blinds Vision-Language Models
2026-03-19 • Computer Vision and Pattern Recognition
AI summary
The authors found that Vision-Language Models (VLMs) do not always make good use of visual information, and that this depends on how a question is framed. Under constrained framings such as yes/no or multiple choice, the models attend less to the relevant parts of the image, which degrades answers and produces inconsistency when the same visual reasoning is required but phrased differently. The authors show that this attention misallocation is the cause of the errors, and they introduce a method that guides the model's visual focus, improving accuracy across question framings.
Vision-Language Models, visual attention, linguistic framing, visual grounding, prompt tuning, multiple choice tasks, yes/no questions, open-ended questions, attention distribution, cross-framing inconsistency
Authors
Wan-Cyuan Fan, Jiayun Luo, Declan Kutscher, Leonid Sigal, Ritwik Gupta
Abstract
Vision-Language Models (VLMs) have been shown to be blind, often underutilizing their visual inputs even on tasks that require visual reasoning. In this work, we demonstrate that VLMs are selectively blind: they modulate the amount of attention applied to visual inputs based on linguistic framing, even when alternative framings demand identical visual reasoning. Using visual attention as a probe, we quantify how framing alters both the amount and distribution of attention over the image. Constrained framings, such as multiple choice and yes/no, induce substantially lower attention to image context than open-ended framings, reduce focus on task-relevant regions, and shift attention towards uninformative tokens. We further demonstrate that this attention misallocation is the principal cause of degraded accuracy and cross-framing inconsistency. Building on this mechanistic insight, we introduce a lightweight prompt-tuning method using learnable tokens that encourages the robust, visually grounded attention patterns observed in open-ended settings, improving both visual grounding and performance across framings.
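The attention probe described in the abstract can be illustrated with a minimal sketch: given a decoder attention matrix over a mixed sequence of image and text tokens, the fraction of attention mass landing on image-token positions quantifies how much visual context the model draws on under a given framing. The function name, array shapes, and toy values below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def image_attention_mass(attn, image_token_mask):
    """Mean fraction of attention mass placed on image tokens.

    attn: (num_queries, num_keys) attention weights; each row sums to 1.
    image_token_mask: (num_keys,) boolean, True where the key position
        corresponds to an image token.
    """
    # Sum each query's attention over the image-token columns only,
    # then average across queries.
    mass_per_query = attn[:, image_token_mask].sum(axis=-1)
    return float(mass_per_query.mean())

# Toy example: 2 query tokens over 4 keys; keys 0-1 are image tokens.
attn = np.array([[0.4, 0.3, 0.2, 0.1],
                 [0.1, 0.1, 0.4, 0.4]])
mask = np.array([True, True, False, False])
print(image_attention_mass(attn, mask))  # 0.45
```

Comparing this statistic between, say, an open-ended prompt and its multiple-choice rephrasing would expose the framing-dependent drop in visual attention the paper reports.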