Strategic Algorithmic Monoculture: Experimental Evidence from Coordination Games
2026-04-10 • Artificial Intelligence
Artificial Intelligence • Computer Science and Game Theory • Multiagent Systems
AI summary
The authors studied how humans and AI language models (LLMs) choose actions when outcomes depend on working with others. They distinguish two ideas: baseline similarity, where agents naturally act alike, and strategic similarity, where agents adjust how much they act alike in response to rewards for coordinating. Their experiments showed that LLMs act very similarly by default and, like humans, adjust their behavior when coordination is rewarded. Unlike humans, however, LLMs are less able to sustain distinct actions when acting differently is actually rewarded.
multi-agent environments • coordination • algorithmic monoculture • baseline similarity • strategic similarity • large language models • heterogeneity • incentives • human-AI comparison
Authors
Gonzalo Ballestero, Hadi Hosseini, Samarth Khanna, Ran I. Shorrer
Abstract
AI agents increasingly operate in multi-agent environments where outcomes depend on coordination. We distinguish primary algorithmic monoculture -- baseline action similarity -- from strategic algorithmic monoculture, whereby agents adjust similarity in response to incentives. We implement a simple experimental design that cleanly separates these forces, and deploy it on human and large language model (LLM) subjects. LLMs exhibit high levels of baseline similarity (primary monoculture) and, like humans, they regulate it in response to coordination incentives (strategic monoculture). While LLMs coordinate extremely well on similar actions, they lag behind humans in sustaining heterogeneity when divergence is rewarded.