Deception and Communication in Autonomous Multi-Agent Systems: An Experimental Study with Among Us
2026-03-27 • Multiagent Systems
AI summary
The authors studied how large language model agents behave when playing the social deduction game Among Us, focusing on how they communicate and deceive. They found that agents mostly use commands or suggestions, but impostor agents also tend to give explanations or denials. Instead of telling outright lies, the agents often use vague or ambiguous language, especially when under pressure, though this doesn’t usually help them win more. The study shows that these AI agents balance being truthful and trying to be useful in tricky ways.
large language models, autonomous agents, social deduction games, Among Us, deception, speech act theory, interpersonal deception theory, directive language, equivocation, multi-agent systems
Authors
Maria Milkowski, Tim Weninger
Abstract
As large language models are deployed as autonomous agents, their capacity for strategic deception raises core questions for coordination, reliability, and safety in multi-goal, multi-agent systems. We study deception and communication in LLM agents through the social deduction game Among Us, a cooperative-competitive environment. Across 1,100 games, autonomous agents produced over one million tokens of meeting dialogue. Using speech act theory and interpersonal deception theory, we find that all agents rely mainly on directive language, while impostor agents shift slightly toward representative acts such as explanations and denials. Deception appears primarily as equivocation rather than outright lies, increasing under social pressure but rarely improving win rates. Our contributions are a large-scale analysis of role-conditioned deceptive behavior in LLM agents and empirical evidence that current agents favor low-risk ambiguity that is linguistically subtle yet strategically limited, revealing a fundamental tension between truthfulness and utility in autonomous communication.