SocialOmni: Benchmarking Audio-Visual Social Interactivity in Omni Models

2026-03-17 · Artificial Intelligence

AI summary

The authors introduce SocialOmni, a new test for multimodal language models that looks at how well they handle social conversations, not just how accurate they are. Their benchmark checks whether models can tell who is speaking, decide the right moment to interrupt, and generate natural-sounding interruptions. They tested 12 leading models and found that strong perception, such as recognizing who is speaking, does not guarantee strong social interaction. The study argues that future models need to focus on social skills, not just understanding.

omni-modal large language models, social interactivity, speaker identification, interruption timing, natural language generation, benchmark, audio-visual perception, dialogue systems, model robustness, conversational AI
Authors
Tianyu Xie, Jinfa Huang, Yuexiao Ma, Rongfang Luo, Yan Yang, Wang Chen, Yuhui Zeng, Ruize Fang, Yixuan Zou, Xiawu Zheng, Jiebo Luo, Rongrong Ji
Abstract
Omni-modal large language models (OLMs) redefine human-machine interaction by natively integrating audio, vision, and text. However, existing OLM benchmarks remain anchored to static, accuracy-centric tasks, leaving a critical gap in assessing social interactivity: the fundamental capacity to navigate dynamic cues in natural dialogue. To this end, we propose SocialOmni, a comprehensive benchmark that operationalizes the evaluation of conversational interactivity across three core dimensions: (i) speaker separation and identification (who is speaking), (ii) interruption timing control (when to interject), and (iii) natural interruption generation (how to phrase the interruption). SocialOmni comprises 2,000 perception samples and a quality-controlled diagnostic set of 209 interaction-generation instances with strict temporal and contextual constraints, complemented by controlled audio-visual inconsistency scenarios that probe model robustness. We benchmark 12 leading OLMs and uncover significant variance in their social-interaction capabilities. Furthermore, our analysis reveals a pronounced decoupling between a model's perceptual accuracy and its ability to generate contextually appropriate interruptions, indicating that understanding-centric metrics alone are insufficient to characterize conversational social competence. Encouragingly, the diagnostics from SocialOmni yield actionable signals for bridging the perception-interaction divide in future OLMs.
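
To make the three-dimension protocol concrete, here is a minimal sketch of how an evaluation harness over "who / when / how" could be organized. Everything in it (the `PerceptionSample` and `InteractionSample` containers, the `model.identify_speaker` and `model.interrupt` interface, and the `judge_naturalness` stub) is a hypothetical assumption for illustration, not the authors' released API or metric definitions.

```python
from dataclasses import dataclass


@dataclass
class PerceptionSample:
    """One perception item: who is speaking in a given clip."""
    clip_id: str
    gold_speaker: str


@dataclass
class InteractionSample:
    """One interaction-generation item with a temporal constraint:
    an interruption is only valid inside [window_start, window_end] seconds."""
    clip_id: str
    window_start: float
    window_end: float


def judge_naturalness(utterance: str) -> float:
    """Placeholder score in [0, 1]; in practice a human or LLM-as-judge rubric."""
    return 1.0 if utterance.strip() else 0.0


def evaluate(model, perception, interaction):
    """Score the three dimensions separately, so perceptual accuracy and
    interactive quality can be compared (the paper reports these decouple)."""
    # (i) Who is speaking: plain accuracy over the perception split.
    id_correct = sum(
        model.identify_speaker(s.clip_id) == s.gold_speaker for s in perception
    )
    # (ii) When to interject: did the model interrupt inside the valid window?
    timing_correct = 0
    # (iii) How to phrase it: judged naturalness of the generated interruption.
    phrasing_scores = []
    for s in interaction:
        t, utterance = model.interrupt(s.clip_id)  # (time in seconds, text)
        if s.window_start <= t <= s.window_end:
            timing_correct += 1
        phrasing_scores.append(judge_naturalness(utterance))
    return {
        "speaker_id_acc": id_correct / len(perception),
        "timing_acc": timing_correct / len(interaction),
        "phrasing_mean": sum(phrasing_scores) / len(interaction),
    }


if __name__ == "__main__":
    class DummyModel:
        def identify_speaker(self, clip_id):
            return "A"

        def interrupt(self, clip_id):
            return 1.5, "Sorry to jump in, but I think that date is off."

    perception = [PerceptionSample("c1", "A"), PerceptionSample("c2", "B")]
    interaction = [InteractionSample("c1", 1.0, 2.0)]
    print(evaluate(DummyModel(), perception, interaction))
    # -> {'speaker_id_acc': 0.5, 'timing_acc': 1.0, 'phrasing_mean': 1.0}
```

Keeping the three scores separate, rather than averaging them into one number, is what lets a harness like this surface the perception-interaction decoupling the abstract describes.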