Seeing the Whole Elephant: A Benchmark for Failure Attribution in LLM-based Multi-Agent Systems

2026-04-24

Multiagent Systems
AI summary

The authors explain that diagnosing failures in multi-agent systems powered by large language models is hard because the agents reason in natural language and interact in complex ways. Existing benchmarks for these failures look only at what the agents output, not at the inputs and context the agents actually receive. The authors created a new benchmark called TraceElephant that records the full execution trace of the system, making it easier to find the cause of problems. Their results show that full observability improves failure-attribution accuracy by up to 76%. This work helps researchers build better ways to understand and debug these systems.

Failure attribution, Multi-agent systems, Large language models, Benchmark, Execution trace, Reproducibility, Debugging, Partial observability, Natural-language reasoning, System evaluation
Authors
Mengzhuo Chen, Junjie Wang, Fangwen Mu, Yawen Wang, Zhe Liu, Huanxiang Feng, Qing Wang
Abstract
Failure attribution, i.e., identifying the responsible agent and decisive step of a failure, is particularly challenging in LLM-based multi-agent systems (MAS) due to their natural-language reasoning, nondeterministic outputs, and intricate interaction dynamics. A reliable benchmark is therefore essential to guide and evaluate attribution techniques. Yet existing benchmarks rely on partially observable traces that capture only agent outputs, omitting the inputs and context that developers actually use when debugging. We argue that failure attribution should be studied under full execution observability, aligning with real-world developer-facing scenarios where complete traces, rather than only outputs, are accessible for diagnosis. To this end, we introduce TraceElephant, a benchmark designed for failure attribution with full execution traces and reproducible environments. We then systematically evaluate failure attribution techniques across various configurations. Specifically, full traces improve attribution accuracy by up to 76% over a partial-observation counterpart, confirming that missing inputs obscure many failure causes. TraceElephant provides a foundation for follow-up failure attribution research, promoting evaluation practices that reflect real-world debugging and supporting the development of more transparent MASs.
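To make the observability distinction concrete, here is a minimal, hypothetical sketch of the difference between a full execution trace and an output-only view. The field names and record shape are illustrative assumptions, not TraceElephant's actual schema; the point is that dropping inputs and context can hide the step where a failure originates.

```python
from dataclasses import dataclass

# Hypothetical trace record: a "full" trace step keeps the inputs and
# context an agent received, not just its output. (Illustrative only.)
@dataclass
class TraceStep:
    step: int
    agent: str
    inputs: str   # what the agent received (prompt, tool results, ...)
    context: str  # surrounding state visible to the agent
    output: str   # what the agent produced

def to_partial(trace):
    """Output-only view: what partial-observation benchmarks expose."""
    return [{"step": s.step, "agent": s.agent, "output": s.output}
            for s in trace]

full_trace = [
    TraceStep(0, "planner", inputs="task: book a flight NYC->LA",
              context="", output="ask searcher for flights"),
    TraceStep(1, "searcher", inputs="find flights NYCLA",  # malformed input
              context="planner message", output="no results found"),
]

partial = to_partial(full_trace)
# In the partial view, step 1 shows only "no results found"; the malformed
# input that actually caused the failure is visible only in the full trace.
print(partial[1])
```

An attribution technique given only `partial` sees a plausible-looking output at every step, whereas the full trace exposes the corrupted input at step 1, which is the kind of cause the abstract's observed accuracy gap reflects.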