Vision-Language Models vs Human: Perceptual Image Quality Assessment
2026-03-25 • Computer Vision and Pattern Recognition
AI summary
The authors studied whether Vision Language Models (VLMs) can judge image quality as humans do, focusing on contrast, colorfulness, and overall preference. They tested six VLMs and compared their scores with human psychophysical data. They found that some VLMs matched human opinions well on colorfulness but less well on contrast, and vice versa. The study also showed that VLMs tend to weigh colorfulness more heavily than contrast when deciding overall preference, similar to humans. Additionally, models that were highly consistent internally did not always match human judgments better, suggesting that their responses vary with the content of the image.
Vision Language Models · Image Quality Assessment · Psychophysics · Contrast · Colorfulness · Perceptual Judgment · Model Benchmarking · Human Alignment · Attribute Weighting · Stimulus Separability
Authors
Imran Mehmood, Imad Ali Shah, Ming Ronnier Luo, Brian Deegan
Abstract
Psychophysical experiments remain the most reliable approach for perceptual image quality assessment (IQA), yet their cost and limited scalability motivate automated approaches. We investigate whether Vision Language Models (VLMs) can approximate human perceptual judgments across three image quality scales: contrast, colorfulness, and overall preference. Six VLMs (four proprietary and two open-weight) are benchmarked against psychophysical data, providing a systematic benchmark of VLMs for perceptual IQA. The results reveal strong attribute-dependent variability: models with high human alignment for colorfulness (ρ up to 0.93) underperform on contrast, and vice versa. Attribute-weighting analysis further shows that most VLMs assign higher weights to colorfulness than to contrast when evaluating overall preference, consistent with the psychophysical data. Intra-model consistency analysis reveals a counterintuitive trade-off: the most self-consistent models are not necessarily the most human-aligned, suggesting that response variability reflects sensitivity to scene-dependent perceptual cues. Furthermore, human-VLM agreement increases with perceptual separability, indicating that VLMs are more reliable when stimulus differences are clearly expressed.
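The human-alignment figures in the abstract (ρ up to 0.93) are rank correlations between model scores and human ratings. As a minimal sketch of how such an alignment metric is computed, the snippet below evaluates Spearman's ρ between hypothetical human mean-opinion scores and hypothetical VLM scores; all data here are illustrative, not taken from the paper.

```python
# Hedged sketch: Spearman rank correlation between hypothetical
# human mean-opinion scores and VLM-assigned scores.

def rank(xs):
    """Return 1-based average ranks (ties get the mean of their positions)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(a, b):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    ra, rb = rank(a), rank(b)
    n = len(a)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    sa = sum((x - ma) ** 2 for x in ra) ** 0.5
    sb = sum((y - mb) ** 2 for y in rb) ** 0.5
    return cov / (sa * sb)

# Hypothetical scores for five stimuli (not from the paper).
human_mos = [1.2, 2.5, 3.1, 4.0, 4.8]
vlm_score = [2.0, 1.5, 3.5, 3.0, 4.9]
print(round(spearman(human_mos, vlm_score), 2))  # → 0.8
```

In a benchmark like the one described, this correlation would be computed per attribute (contrast, colorfulness, overall preference) and per model, which is what allows the attribute-dependent variability to be observed.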