HaloProbe: Bayesian Detection and Mitigation of Object Hallucinations in Vision-Language Models

2026-04-07
Computer Vision and Pattern Recognition · Machine Learning
AI summary

The authors study a problem where large models that describe images sometimes mention objects that are not actually in the image, known as object hallucinations. They find that previous ways of detecting these errors from attention patterns are unreliable because of confounding factors such as where words appear in the description and how often objects are repeated. To address this, they introduce HaloProbe, a method that combines information from the description itself with the model's internal signals to better estimate when hallucinations occur. Using HaloProbe to guide how the model generates descriptions reduces hallucinations without hurting the quality of the text, unlike other methods that modify the model itself.
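The confounding effect described above can be made concrete with a toy numeric example of Simpson's paradox. The numbers below are invented for illustration and are not from the paper: within each token-position group, hallucinated tokens receive less attention than real ones, yet because hallucinated tokens cluster in high-attention positions, the aggregate trend reverses.

```python
# Toy illustration of Simpson's paradox in attention-based hallucination
# detection. All numbers are invented for illustration only.
# Each tuple: (position group, is_hallucinated, mean attention, token count)
data = [
    ("early", False, 0.8, 2),   # few real tokens early, high attention
    ("early", True,  0.6, 8),   # many hallucinated tokens early
    ("late",  False, 0.4, 8),   # many real tokens late, low attention
    ("late",  True,  0.2, 2),   # few hallucinated tokens late
]

def mean_attention(rows):
    """Count-weighted mean attention over the given rows."""
    total = sum(attn * n for _, _, attn, n in rows)
    count = sum(n for _, _, _, n in rows)
    return total / count

# Within each position group, hallucinated tokens get LESS attention...
for group in ("early", "late"):
    real = mean_attention([r for r in data if r[0] == group and not r[1]])
    hall = mean_attention([r for r in data if r[0] == group and r[1]])
    assert hall < real

# ...but aggregated over all positions, the trend reverses:
real_all = mean_attention([r for r in data if not r[1]])  # 0.48
hall_all = mean_attention([r for r in data if r[1]])      # 0.52
assert hall_all > real_all
```

This is why coarse, aggregated attention statistics can point in the wrong direction: conditioning on the confounder (token position) is necessary to recover the true within-group trend.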

vision-language models, object hallucination, attention weights, Simpson's paradox, Bayesian framework, token-level probabilities, model decoding, non-invasive mitigation, image captioning, balanced training
Authors
Reihaneh Zohrabi, Hosein Hasani, Akshita Gupta, Mahdieh Soleymani Baghshah, Anna Rohrbach, Marcus Rohrbach
Abstract
Large vision-language models can produce object hallucinations in image descriptions, highlighting the need for effective detection and mitigation strategies. Prior work commonly relies on the model's attention weights on visual tokens as a detection signal. We reveal that coarse-grained attention-based analysis is unreliable due to hidden confounders, specifically token position and object repetition in a description. This leads to Simpson's paradox: the attention trends reverse or disappear when statistics are aggregated. Based on this observation, we introduce HaloProbe, a Bayesian framework that factorizes external description statistics and internal decoding signals to estimate token-level hallucination probabilities. HaloProbe uses balanced training to isolate internal evidence and combines it with a learned prior over external features to recover the true posterior. While intervention-based mitigation methods often degrade utility or fluency by modifying a model's internals, we use HaloProbe as an external scoring signal for non-invasive mitigation. Our experiments show that HaloProbe-guided decoding reduces hallucinations more effectively than state-of-the-art intervention-based methods while preserving utility.
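The Bayesian combination described in the abstract can be sketched in log-odds space. The paper's exact factorization is not given here, so the function below is a minimal illustrative sketch under the assumption that a prior log-odds term comes from external description features (e.g. token position, object repetition) and a log likelihood ratio comes from an internal classifier trained on balanced data; the name `posterior_hallucination_prob` is hypothetical.

```python
import math

def posterior_hallucination_prob(prior_logit: float,
                                 internal_log_lr: float) -> float:
    """Illustrative Bayesian update for a token-level hallucination score.

    prior_logit:      log-odds of hallucination from external description
                      statistics (a learned prior, assumed given).
    internal_log_lr:  log likelihood ratio from internal decoding signals,
                      assumed calibrated via balanced training.

    Bayes' rule in log-odds form:
        posterior log-odds = prior log-odds + log likelihood ratio
    """
    z = prior_logit + internal_log_lr
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid -> posterior probability

# With no internal evidence (log LR = 0), the posterior equals the prior:
p = posterior_hallucination_prob(prior_logit=math.log(0.25 / 0.75),
                                 internal_log_lr=0.0)
# p == 0.25, the prior probability of hallucination
```

Balanced training matters here because a classifier fit on the natural (imbalanced) label distribution would absorb the prior into its score, double-counting the external evidence when the two terms are added.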