How AI Aggregation Affects Knowledge
2026-04-06 • Artificial Intelligence
Artificial Intelligence · Computers and Society · Social and Information Networks
AI summary
The authors study how artificial intelligence (AI) systems that learn from people's beliefs can influence social learning over time. They extend a classic model by adding an AI that collects data, learns from it, and then sends information back to people. They find that if the AI updates its learning too quickly, it may not help improve knowledge, but slower updating can be beneficial. The authors also show that using many local AI systems focused on specific topics works better than a single global AI for improving learning.
Artificial Intelligence · DeGroot model · social learning · learning gap · AI aggregator · updating speed · local aggregators · global aggregator · training data
Authors
Daron Acemoglu, Tianyi Lin, Asuman Ozdaglar, James Siderius
Abstract
Artificial intelligence (AI) changes social learning when aggregated outputs become training data for future predictions. To study this, we extend the DeGroot model by introducing an AI aggregator that trains on population beliefs and feeds synthesized signals back to agents. We define the learning gap as the deviation of long-run beliefs from the efficient benchmark, allowing us to capture how AI aggregation affects learning. Our main result identifies a threshold in the speed of updating: when the aggregator updates too quickly, there is no positive-measure set of training weights that robustly improves learning across a broad class of environments, whereas such weights exist when updating is sufficiently slow. We then compare global and local architectures. Local aggregators trained on proximate or topic-specific data robustly improve learning in all environments. Consequently, replacing specialized local aggregators with a single global aggregator worsens learning in at least one dimension of the state.
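The feedback loop the abstract describes can be sketched in a small simulation. The exact functional forms are not given here, so the choices below (linear DeGroot belief averaging over a row-stochastic network, a single aggregator that retrains on the population mean at speed `eta`, and the true state as the efficient benchmark) are illustrative assumptions, not the authors' specification.

```python
import numpy as np

# Hypothetical sketch of an AI-augmented DeGroot dynamic: agents average
# neighbors' beliefs, the aggregator trains on current beliefs, and its
# synthesized signal is fed back to every agent. All parameter values and
# functional forms are assumptions for illustration.

rng = np.random.default_rng(0)

n = 20                      # number of agents
theta = 1.0                 # true state (efficient benchmark)
W = rng.random((n, n))
W /= W.sum(axis=1, keepdims=True)   # row-stochastic social network weights

alpha = 0.2                 # weight agents place on the aggregator's signal
eta = 0.05                  # aggregator updating speed (here: slow updating)

beliefs = theta + rng.normal(0.0, 1.0, n)   # noisy initial signals
ai_signal = beliefs.mean()                  # aggregator trained on beliefs

for _ in range(500):
    # Aggregator retrains on current population beliefs at speed eta.
    ai_signal = (1 - eta) * ai_signal + eta * beliefs.mean()
    # Agents mix neighbors' beliefs with the synthesized AI signal.
    beliefs = (1 - alpha) * (W @ beliefs) + alpha * ai_signal

# Learning gap: deviation of long-run beliefs from the efficient benchmark.
learning_gap = np.abs(beliefs - theta).max()
```

In this toy version, varying `eta` shows the qualitative tension the paper formalizes: a fast-updating aggregator chases the population's current (possibly biased) consensus, while slow updating anchors beliefs closer to the early, information-rich signals.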