When Prompts Override Vision: Prompt-Induced Hallucinations in LVLMs

2026-04-23 · Computer Vision and Pattern Recognition

Computer Vision and Pattern Recognition · Artificial Intelligence · Computation and Language · Machine Learning
AI summary

The authors study why large vision-language models sometimes describe things that are not in the image, known as hallucinations. They find that these mistakes mostly happen because the models rely too heavily on what they expect from the text instructions rather than on the actual picture. To fix this, they develop a new way to fine-tune these models so their answers are based more on the image and less on guesses. The improved model hallucinates less while still doing well on other tests. They also release their benchmark, data, and code to help others continue this work.

large vision-language models, hallucinations, textual priors, fine-tuning, preference optimization, benchmark, visual grounding, background knowledge, language model, vision backbone
Authors
Pegah Khayatan, Jayneel Parekh, Arnaud Dapogny, Mustafa Shukor, Alasdair Newson, Matthieu Cord
Abstract
Despite impressive progress in the capabilities of large vision-language models (LVLMs), these systems remain vulnerable to hallucinations, i.e., outputs that are not grounded in the visual input. Prior work has attributed hallucinations in LVLMs to factors such as limitations of the vision backbone or the dominance of the language component, yet the relative importance of these factors remains unclear. To resolve this ambiguity, we propose HalluScope, a benchmark to better understand the extent to which different factors induce hallucinations. Our analysis indicates that hallucinations largely stem from excessive reliance on textual priors and background knowledge, especially information introduced through textual instructions. To mitigate hallucinations induced by textual instruction priors, we propose HalluVL-DPO, a framework for fine-tuning off-the-shelf LVLMs towards more visually grounded responses. HalluVL-DPO leverages preference optimization using a curated training dataset that we construct, guiding the model to prefer grounded responses over hallucinated ones. We demonstrate that our optimized model effectively mitigates the targeted hallucination failure mode, while preserving or improving performance on other hallucination benchmarks and visual capability evaluations. To support reproducibility and further research, we will publicly release our evaluation benchmark, preference training dataset, and code at https://pegah-kh.github.io/projects/prompts-override-vision/ .
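The abstract states that HalluVL-DPO uses preference optimization to steer the model toward grounded responses. As a rough illustration of the underlying idea (not the paper's actual implementation, whose details are not given here), the standard Direct Preference Optimization (DPO) loss on a single preference pair can be sketched as follows, where the "chosen" response is the visually grounded one and the "rejected" response is the hallucinated one:

```python
import math

def dpo_loss(policy_logp_chosen: float, policy_logp_rejected: float,
             ref_logp_chosen: float, ref_logp_rejected: float,
             beta: float = 0.1) -> float:
    """Standard DPO loss for one preference pair (illustrative sketch only).

    Inputs are sequence log-probabilities of the chosen (grounded) and
    rejected (hallucinated) responses under the policy being trained and
    under a frozen reference model; beta scales the implicit reward.
    """
    # Implicit rewards: how much the policy has moved away from the reference
    chosen_margin = policy_logp_chosen - ref_logp_chosen
    rejected_margin = policy_logp_rejected - ref_logp_rejected
    margin = beta * (chosen_margin - rejected_margin)
    # -log(sigmoid(margin)): small when the policy prefers the grounded answer
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

Minimizing this loss pushes the policy to assign relatively higher probability to the grounded response than the reference model does; the variable names and the `beta` default here are illustrative assumptions.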