When Are LLM Inferences Acceptable? User Reactions and Control Preferences for Inferred Personal Information
2026-05-11 • Human-Computer Interaction
Human-Computer Interaction • Cryptography and Security
AI summary
The authors studied how people feel about the hidden guesses (inferences) that ChatGPT makes based on their conversations, like guessing income or medical history. They created a tool to show users examples of these inferences from their own chat history and asked 18 regular users about them. Surprisingly, most users were curious rather than worried, getting uncomfortable mostly when the guesses felt wrong or misused. Users were less okay with advertisers using these guesses than with the chatbot platform itself. The authors suggest that whether these inferences are acceptable depends on who makes them, how they are used, and where they stay.
Large Language Models • ChatGPT • User Privacy • Inferences • Data Visualization • Mixed-Methods Study • User Experience • Data Use Norms • Third-Party Applications
Authors
Kyzyl Monteiro, Minjung Park, Alexander Ioffrida, Angelina Sanna, Hao-Ping Lee, Niloofar Mireshghallah, Yang Wang, Sauvik Das
Abstract
Ask ChatGPT about vacation planning, and it may infer your income. Ask it about medication, and it may infer your medical history. Because such inferences can expose more information than users intend to reveal, prior work argues that they are a defining privacy risk of LLM-based systems. Yet prior work has mostly shown that LLMs can make potentially privacy-violating inferences, not how users experience those inferences or what controls they may want governing their use. We built the Reflective Layer, a visualization tool that surfaces example unstated inferences from users' own ChatGPT histories, and used it in a mixed-methods study in which 18 regular ChatGPT users evaluated 215 inferences surfaced from their own conversations. Counterintuitively, participants reacted more with curiosity and interest than with distress and concern. Discomfort arose mainly when inferences felt misrepresentative of the user or misaligned with expected use. Participants were also markedly less comfortable with advertisers and third-party applications using those inferences than with platform providers. These findings suggest that the acceptability of LLM inferences is governed not only by their content, but also by context-sensitive norms around how they are generated, retained within the platform, and transmitted beyond it.
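For readers curious how unstated inferences might be surfaced from chat transcripts, here is a minimal sketch that asks an LLM to list attributes it could infer but that the user never stated. This is not the paper's Reflective Layer implementation: the prompt wording, model name, and output schema are illustrative assumptions, and a real tool would need output validation and careful handling of users' data.

```python
# Illustrative sketch only: prompt an LLM to list unstated inferences from
# one conversation transcript. Assumes the openai>=1.0 Python SDK and an
# OPENAI_API_KEY in the environment; prompt and schema are hypothetical.
import json

from openai import OpenAI

client = OpenAI()

PROMPT = (
    "Below is a transcript of a user's past chatbot conversation. "
    "List personal attributes (e.g., income bracket, health conditions, "
    "location) that could plausibly be inferred about the user but that "
    "the user never stated explicitly. Return only a JSON array of objects "
    'with keys "attribute", "inference", and "supporting_excerpt".'
)


def surface_inferences(transcript: str, model: str = "gpt-4o-mini") -> list[dict]:
    """Return candidate unstated inferences for a single transcript."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": PROMPT},
            {"role": "user", "content": transcript},
        ],
    )
    # Assumes the model returns bare JSON; production code should validate
    # and repair the output rather than trusting json.loads to succeed.
    return json.loads(response.choices[0].message.content)


if __name__ == "__main__":
    demo = "User: Can you plan a two-week trip to the Maldives for my family of four?"
    for item in surface_inferences(demo):
        print(f'{item["attribute"]}: {item["inference"]}')
```

On the demo transcript above, such a pipeline might surface guesses like a high income bracket or family composition, which mirrors the kind of unstated inference the study asked participants to react to.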