Representational Harms in LLM-Generated Narratives Against Global Majority Nationalities

2026-04-24

Computation and Language
AI summary

The authors studied how large language models (LLMs) portray people of different nationalities when asked to generate stories. They found that these models often show unfair biases, such as stereotyping or erasing certain groups, especially for non-US nationalities. People from less dominant countries are portrayed less positively and are more often cast in subordinate or negative roles. The authors highlight that these biases persist even when US nationality cues in the prompts are replaced with non-US ones, suggesting deeper, structural issues. They call for more research centered on the perspectives of the majority of the world's population to reduce these harms.

Large Language Models, Bias, National Origin, Stereotypes, Representation, Global Majority, Narrative Generation, US-Centrism, Minoritized Identities, Cultural Harms
Authors
Ilana Nguyen, Harini Suresh, Thema Monroe-White, Evan Shieh
Abstract
Large language models (LLMs) are increasingly used for text generation tasks from everyday use to high-stakes enterprise and government applications, including simulated interviews with asylum seekers. While many works highlight the new potential applications of LLMs, there are risks of LLMs encoding and perpetuating harmful biases about non-dominant communities across the globe. To better evaluate and mitigate such harms, more research examining how LLMs portray diverse individuals is needed. In this work, we study how national origin identities are portrayed by widely-adopted LLMs in response to open-ended narrative generation prompts. Our findings demonstrate the presence of persistent representational harms by national origin, including harmful stereotypes, erasure, and one-dimensional portrayals of Global Majority identities. Minoritized national identities are simultaneously underrepresented in power-neutral stories and overrepresented in subordinated character portrayals, which are over fifty times more likely to appear than dominant portrayals. The degree of harm is amplified when US nationality cues (e.g., "American") are present in input prompts. Notably, we find that the harms we identify cannot be explained away via sycophancy, as US-centric biases persist even when replacing US nationality cues with non-US national identities in the prompts. Based on our findings, we call for further exploration of cultural harms in LLMs through methodologies that center Global Majority perspectives and challenge the uncritical adoption of US-based LLMs for the classification, surveillance, and misrepresentation of the majority of our planet.