Evaluating the False Trust Engendered by LLM Explanations

2026-05-11

Human-Computer Interaction
AI summary

The authors studied how different types of AI explanations affect whether people can tell if an AI's answer is right or wrong. They tested common explanation formats, such as showing the AI's reasoning steps or summaries of them, and found that these made people trust the AI more even when its answer was wrong. However, when users saw explanations that presented both supporting and opposing arguments, they were better at spotting wrong answers. This suggests that simply explaining how the AI reasons is not enough to prevent false trust, but balanced arguments can help users judge AI reliability more accurately.

Large Language Models, Large Reasoning Models, Explanation Types, Reasoning Traces, Post-hoc Explanations, User Study, False Trust, Contrastive Explanation, AI Trustworthiness, Human-AI Interaction
Authors
Vardhan Palod, Upasana Biswas, Subbarao Kambhampati
Abstract
Large Language Models (LLMs) and Large Reasoning Models (LRMs) are increasingly used for critical tasks, yet they provide no guarantees about the correctness of their solutions. Users must decide whether to trust the model's answer, aided by reasoning traces, their summaries, or post-hoc generated explanations. These reasoning traces, despite evidence that they are neither faithful representations of the model's computations nor necessarily semantically meaningful, are often interpreted as provenance explanations. It is unclear whether explanations or reasoning traces help users identify when the AI is incorrect, or whether they simply persuade users to trust the AI regardless. In this paper, we take a user-centered approach and develop an evaluation protocol to study how different explanation types affect users' ability to judge the correctness of AI-generated answers and how they engender false trust in users. We conduct a between-subjects user study that simulates a setting where users do not have the means to verify the solution, and we analyze the false trust engendered by commonly used LLM explanations: reasoning traces, their summaries, and post-hoc explanations. We also test a contrastive dual explanation setting in which we present arguments both for and against the AI's answer. We find that reasoning traces and post-hoc explanations are persuasive but not informative: they increase user acceptance of LLM predictions regardless of their correctness. In contrast, the dual explanation is the only condition that genuinely improves users' ability to distinguish correct from incorrect AI outputs.