The Algorithmic Caricature: Auditing LLM-Generated Political Discourse Across Crisis Events

2026-05-12

Computation and Language, Artificial Intelligence, Computers and Society
AI summary

The authors studied political posts from nine major crises and compared real social media discussions with texts generated by AI. They found that while AI texts are fluent, they do not behave like real online conversations at the population level. AI-generated posts tend to be more negative, repetitive, and abstract, while real posts show more emotional variety and informal language. These differences depend on the type of event and are larger in fast-moving crises. The authors suggest that evaluating AI texts by how socially realistic they appear is a useful way to understand their limits beyond just checking grammar.

Large Language Models, Synthetic Discourse, Political Text Generation, Emotional Intensity, Lexical Framing, Structural Regularity, Crisis Events, Social Realism, Population-level Analysis, Computational Social Science
Authors
Gunjan, Sidahmed Benabderrahmane, Talal Rahwan
Abstract
Large Language Models (LLMs) can generate fluent political text at scale, raising concerns about synthetic discourse during crises and social conflict. Existing AI-text detection often focuses on sentence-level cues such as perplexity, burstiness, or token irregularities, but these signals may weaken as generative systems improve. We instead adopt a Computational Social Science (CSS) perspective and ask whether synthetic political discourse behaves like an observed online population. We construct a paired corpus of 1,789,406 posts across nine crisis events: COVID-19, the Jan. 6 Capitol attack, the 2020 and 2024 U.S. elections, Dobbs/Roe v. Wade, the 2020 BLM protests, U.S. midterms, the Utah shooting, and the U.S.-Iran war. For each event, we compare observed discourse from social platforms with synthetic discourse generated for the same context. We evaluate four dimensions: emotional intensity, structural regularity, lexical-ideological framing, and cross-event dependency, using mean gaps and dispersion evidence. Across events, synthetic discourse is fluent but unrealistic at the population level. It is generally more negative and less dispersed in sentiment, structurally more regular, and lexically more abstract than observed discourse. Observed discourse instead shows broader emotional variation, longer-tailed structural distributions, and more context-specific, colloquial lexical markers. These differences are event-dependent: larger for fast-moving, decentralized crises and smaller for formal or institutionally mediated events. We summarize them with a simple event-level measure, the Caricature Gap. Our findings suggest that the main limitation of synthetic political discourse is not grammar or fluency, but reduced population realism. Population-level auditing complements traditional text detection and provides a CSS framework for evaluating the social realism of generated discourse.
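
The abstract does not give the exact formula for the Caricature Gap, only that it summarizes mean gaps and dispersion differences between observed and synthetic discourse at the event level. The sketch below is one plausible reading under that assumption: it averages standardized mean differences of a few per-post features for a single event. The feature functions and the aggregation rule are illustrative assumptions, not the authors' definition.

```python
# Hypothetical sketch of an event-level "Caricature Gap" (assumed form:
# average standardized mean difference across per-post features).
from statistics import mean, pstdev

def standardized_gap(observed, synthetic):
    """Absolute difference in means, scaled by the pooled standard deviation."""
    pooled_sd = pstdev(observed + synthetic) or 1.0
    return abs(mean(observed) - mean(synthetic)) / pooled_sd

def caricature_gap(observed_posts, synthetic_posts, feature_fns):
    """Average the standardized gaps over all features for one crisis event."""
    gaps = []
    for feature in feature_fns:
        obs = [feature(p) for p in observed_posts]
        syn = [feature(p) for p in synthetic_posts]
        gaps.append(standardized_gap(obs, syn))
    return mean(gaps)

# Illustrative per-post features standing in for the paper's dimensions
# (structural regularity, lexical variety); real features would include
# sentiment scores and framing markers.
feature_fns = [
    lambda p: len(p.split()),                                         # post length in tokens
    lambda p: len(set(p.lower().split())) / max(len(p.split()), 1),   # type-token ratio
]

observed_posts = ["they just shut the whole bridge down omg", "cant believe this is happening again"]
synthetic_posts = ["The ongoing crisis raises serious concerns.", "The ongoing situation raises serious concerns."]
print(round(caricature_gap(observed_posts, synthetic_posts, feature_fns), 3))
```

A larger value would indicate that synthetic posts for that event diverge more from the observed population, matching the paper's claim that gaps are wider for fast-moving, decentralized crises.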