No Universal Courtesy: A Cross-Linguistic, Multi-Model Study of Politeness Effects on LLMs Using the PLUM Corpus

2026-04-17

Computation and Language
AI summary

The authors studied how large language models (LLMs) respond to user prompts that vary in politeness across three languages: English, Hindi, and Spanish. Testing five models, they found that polite prompts generally improve response quality and impolite ones tend to reduce it, though the effect varies by language and model. For English prompts, models respond best to polite or direct tones; Hindi favors deferential tones; Spanish works better with assertive tones. The authors also release a new dataset, PLUM, to support future research on politeness across languages.

Large Language Models, Politeness Theory, Impoliteness Framework, Multilingual NLP, Prompt Tone, Model Robustness, Dialogue History, Response Quality, Politeness Corpus, Cross-lingual Communication
Authors
Hitesh Mehta, Arjit Saxena, Garima Chhikara, Rohit Kumar
Abstract
This paper explores the response of Large Language Models (LLMs) to user prompts with different degrees of politeness and impoliteness. The Politeness Theory by Brown and Levinson and the Impoliteness Framework by Culpeper form the basis of experiments conducted across three languages (English, Hindi, Spanish), five models (Gemini-Pro, GPT-4o Mini, Claude 3.7 Sonnet, DeepSeek-Chat, and Llama 3), and three dialogue-history conditions (raw, polite, and impolite). Our sample consists of 22,500 prompt-response pairs, evaluated across five levels of politeness using an eight-factor assessment framework: coherence, clarity, depth, responsiveness, context retention, toxicity, conciseness, and readability. The findings show that model performance is highly influenced by tone, dialogue history, and language. While polite prompts enhance average response quality by up to ~11% and impolite tones worsen it, these effects are neither consistent nor universal across languages and models. English is best served by courteous or direct tones, Hindi by deferential and indirect tones, and Spanish by assertive tones. Among the models, Llama is the most tone-sensitive (11.5% range), whereas GPT is more robust to adversarial tone. These results indicate that politeness is a quantifiable computational variable that affects LLM behaviour, though its impact is language- and model-dependent rather than universal. To support reproducibility and future work, we additionally release PLUM (Politeness Levels in Utterances, Multilingual), a publicly available corpus of 1,500 human-validated prompts across three languages and five politeness categories, and provide a formal supplementary analysis of six falsifiable hypotheses derived from politeness theory, empirically assessed against the dataset.
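The eight evaluation factors named in the abstract could be aggregated along the lines of the sketch below. Only the factor names come from the paper; the 0-1 score scale, the inversion of toxicity (treating lower raw toxicity as better), and the unweighted mean are illustrative assumptions, not the paper's actual scoring procedure.

```python
# Hypothetical sketch of aggregating the eight-factor response-quality
# assessment described in the abstract. Factor names are from the paper;
# everything else (scale, toxicity inversion, unweighted mean) is assumed.

FACTORS = [
    "coherence", "clarity", "depth", "responsiveness",
    "context_retention", "toxicity", "conciseness", "readability",
]

def response_quality(scores: dict[str, float]) -> float:
    """Aggregate per-factor scores (each assumed in [0, 1]) into one score."""
    missing = set(FACTORS) - scores.keys()
    if missing:
        raise ValueError(f"missing factor scores: {sorted(missing)}")
    adjusted = {
        # Assumption: toxicity is a penalty, so invert it before averaging.
        f: (1.0 - v if f == "toxicity" else v)
        for f, v in scores.items() if f in FACTORS
    }
    return sum(adjusted.values()) / len(FACTORS)

# Example: a response scoring 0.8 on every factor, with low (0.1) toxicity.
example = {f: 0.8 for f in FACTORS} | {"toxicity": 0.1}
print(round(response_quality(example), 3))  # -> 0.812
```

A weighted mean (e.g. down-weighting readability relative to coherence) would be a natural variation; the paper does not specify weights, so the sketch keeps them uniform.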