EVA-Bench: A New End-to-end Framework for Evaluating Voice Agents

2026-05-13

Sound, Artificial Intelligence, Computation and Language, Machine Learning
AI summary

The authors created EVA-Bench, a new way to test voice agents, which are AI systems that talk to people to help with tasks. Their tool can simulate realistic conversations between two bots and automatically detect and redo flawed conversations before measuring quality. They also made two special scores: one for how accurate and clear the agent is, and another for how smooth and natural the conversation feels. They tested many systems and found none that do well on both scores at the same time, and noticed that accent and background noise can make things harder for these voice agents. The full testing framework and data are openly shared for others to use.

voice agents, simulated conversations, evaluation metrics, task completion, speech fidelity, accent robustness, noise robustness, multi-turn dialogue, bot-to-bot conversation, open-source benchmark
Authors
Tara Bogavelli, Gabrielle Gauthier Melançon, Katrina Stankiewicz, Oluwanifemi Bamgbose, Fanny Riols, Hoang H. Nguyen, Raghav Mehndiratta, Lindsay Devon Brin, Joseph Marinier, Hari Subramani, Anil Madamala, Sridhar Krishna Nemala, Srinivas Sunkara
Abstract
Voice agents, artificial intelligence systems that conduct spoken conversations to complete tasks, are increasingly deployed across enterprise applications. However, no existing benchmark jointly addresses two core evaluation challenges: generating realistic simulated conversations, and measuring quality across the full scope of voice-specific failure modes. We present EVA-Bench, an end-to-end evaluation framework that addresses both. On the simulation side, EVA-Bench orchestrates bot-to-bot audio conversations over dynamic multi-turn dialogues, with automatic simulation validation that detects user simulator errors and regenerates affected conversations before scoring. On the measurement side, EVA-Bench introduces two composite metrics: EVA-A (Accuracy), capturing task completion, faithfulness, and audio-level speech fidelity; and EVA-X (Experience), capturing conversation progression, spoken conciseness, and turn-taking timing. Both metrics apply across agent architectures, enabling direct cross-architecture comparison. EVA-Bench includes 213 scenarios across three enterprise domains, a controlled perturbation suite for accent and noise robustness, and pass@1, pass@k, and pass^k measurements that distinguish peak from reliable capability. Across 12 systems spanning all three architectures, we find: (1) no system simultaneously exceeds 0.5 on both EVA-A pass@1 and EVA-X pass@1; (2) peak and reliable performance diverge substantially (median pass@k - pass^k gap of 0.44 on EVA-A); and (3) accent and noise perturbations expose substantial robustness gaps, with effects varying across architectures, systems, and metrics (mean up to 0.314). We release the full framework, evaluation suite, and benchmark data under an open-source license.
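
For context on the reliability measures mentioned in the abstract, the minimal Python sketch below computes pass@1, pass@k, and pass^k for a single scenario from repeated trial outcomes, assuming the standard combinatorial estimators (pass@k: at least one of k sampled trials succeeds; pass^k: all k succeed). The exact estimators and aggregation used by EVA-Bench may differ, and the trial counts in the example are hypothetical.

from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    # Probability that at least one of k trials drawn (without replacement)
    # from n total trials succeeds, given c observed successes.
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

def pass_hat_k(n: int, c: int, k: int) -> float:
    # Probability that all k trials drawn from n succeed (pass^k),
    # given c observed successes.
    if c < k:
        return 0.0
    return comb(c, k) / comb(n, k)

# Hypothetical example: one scenario run n=8 times with c=5 successes.
n, c, k = 8, 5, 4
print(pass_at_k(n, c, 1))   # pass@1 = 0.625
print(pass_at_k(n, c, k))   # peak capability
print(pass_hat_k(n, c, k))  # reliable capability

Benchmark-level scores would then be obtained by averaging these per-scenario values across scenarios; the pass@k - pass^k gap widens when a system succeeds only intermittently on the same scenario, which is the peak-versus-reliability distinction the abstract draws.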