Why Do Vision Language Models Struggle To Recognize Human Emotions?

2026-04-16

Computer Vision and Pattern Recognition · Artificial Intelligence
AI summary

The authors study why vision-language models (VLMs) struggle to recognize human emotions, especially in dynamic facial expressions. They find VLMs often misclassify rare emotions because of biases in training data, and they cannot effectively use the quick changes in facial movements due to memory and input limits. To address this, the authors suggest better sampling methods and a new approach that summarizes the in-between video frames as text, helping the model understand emotional changes without being overwhelmed by too much visual information. This helps preserve important brief expressions often missed by current models.

Vision-Language Models · Dynamic Facial Expression Recognition · Long-Tailed Distribution · Rare Emotion Classification · Temporal Information · Micro-Expressions · Sparse Temporal Sampling · Context Enrichment · Natural Language Summaries · Emotional Trajectory
Authors
Madhav Agarwal, Sotirios A. Tsaftaris, Laura Sevilla-Lara, Steven McDonagh
Abstract
Understanding emotions is a fundamental ability for intelligent systems that interact with humans. Vision-language models (VLMs) have made tremendous progress in the last few years on many visual tasks, potentially offering a promising solution for understanding emotions. Surprisingly, however, even the most sophisticated contemporary VLMs struggle to recognize human emotions and fail to outperform even specialized vision-only classifiers. In this paper we ask the question "Why do VLMs struggle to recognize human emotions?", and observe that the inherently continuous and dynamic task of dynamic facial expression recognition (DFER) exposes two critical VLM vulnerabilities. First, emotion datasets are naturally long-tailed, and the web-scale data used to pre-train VLMs exacerbates this head-class bias, causing them to systematically collapse rare, under-represented emotions into common categories. We propose alternative sampling strategies that avoid favoring common concepts. Second, temporal information is critical for understanding emotions. However, VLMs are unable to represent temporal information over dense frame sequences, as they are limited by context size and the number of tokens that can fit in memory, which poses a clear challenge for emotion recognition. We demonstrate that the sparse temporal sampling strategy used in VLMs is inherently misaligned with the fleeting nature of micro-expressions (0.25-0.5 seconds), which are often the most critical affective signal. As a diagnostic probe, we propose a multi-stage context enrichment strategy that utilizes the information from "in-between" frames by first converting them into natural language summaries. This enriched textual context is provided as input to the VLM alongside sparse keyframes, preventing attentional dilution from excessive visual data while preserving the emotional trajectory.
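To make the two abstract claims concrete, here is a minimal, hypothetical sketch (not the authors' code; all function names are illustrative). It first shows the arithmetic behind the misalignment: uniformly sampling 8 keyframes from a 10-second, 30 fps clip leaves gaps far longer than the 0.25-0.5 s span of a micro-expression. It then sketches the context-enrichment idea of keeping sparse keyframes as visual input while turning the skipped "in-between" frames into text via some captioning function.

```python
# Illustrative sketch, not the paper's implementation.
# Assumed/hypothetical names: sparse_sample_indices, enrich_context, captioner.

def sparse_sample_indices(n_frames: int, n_keyframes: int) -> list[int]:
    """Uniformly pick n_keyframes frame indices from a video of n_frames."""
    step = (n_frames - 1) / (n_keyframes - 1)
    return [round(i * step) for i in range(n_keyframes)]

fps, duration_s = 30, 10
n_frames = fps * duration_s               # 300 frames
keyframes = sparse_sample_indices(n_frames, 8)

# Gap between consecutive keyframes, in seconds. With ~1.4 s between
# keyframes, a 0.25-0.5 s micro-expression can fall entirely in a gap.
gap_s = (keyframes[1] - keyframes[0]) / fps
print(f"gap between keyframes: {gap_s:.2f}s vs micro-expression <= 0.5s")

def enrich_context(frames, keyframes, captioner):
    """Toy context enrichment: keep sparse keyframes as visual input and
    summarize each skipped in-between segment as natural language."""
    visual_input, text_context = [], []
    for a, b in zip(keyframes, keyframes[1:]):
        visual_input.append(frames[a])
        # Convert the skipped segment (a, b) into a short text summary.
        text_context.append(captioner(frames[a + 1 : b]))
    visual_input.append(frames[keyframes[-1]])
    return visual_input, text_context
```

The VLM would then receive the sparse keyframes plus the text summaries, so the emotional trajectory of the skipped segments survives without flooding the context window with dense visual tokens.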