AI summary
The authors studied ways to make language models more reliable by checking facts against extra retrieved information (retrieval-augmented generation) combined with conformal factuality, a method that aims to statistically guarantee accuracy. They found that while conformal factuality can filter out wrong facts, it often produces less useful or even empty answers, and it struggles when the data distribution changes or includes distracting information. They also found that simpler, lightweight methods can check factuality with far less computation while performing just as well. Overall, the authors highlight important trade-offs between correctness and usefulness and argue that new methods are needed for reliable and efficient fact-checking in language models.
Keywords
Large Language Models, Hallucination, Retrieval-Augmented Generation, Conformal Factuality, Statistical Reliability, Calibration, Distribution Shift, Entailment Verification, Factuality-Informativeness Trade-off, Computational Efficiency
Authors
Yi Chen, Daiwei Chen, Sukrut Madhav Chikodikar, Caitlyn Heqi Yin, Ramya Korlakai Vinayak
Abstract
Large language models (LLMs) frequently hallucinate, limiting their reliability in knowledge-intensive applications. Retrieval-augmented generation (RAG) and conformal factuality have emerged as potential ways to address this limitation. While RAG aims to ground responses in retrieved evidence, it provides no statistical guarantee that the final output is correct. Conformal factuality filtering offers distribution-free statistical reliability by scoring and filtering atomic claims with a threshold calibrated on held-out data; however, the informativeness of the final output is not guaranteed. We systematically analyze the reliability and usefulness of conformal factuality for RAG-based LLMs across generation, scoring, calibration, robustness, and efficiency. We propose novel informativeness-aware metrics that better reflect task utility under conformal filtering. Across three benchmarks and multiple model families, we find that (i) conformal filtering suffers from low usefulness at high factuality levels due to vacuous outputs, (ii) the conformal factuality guarantee is not robust to distribution shifts and distractors, highlighting that the guarantee requires calibration data to closely match deployment conditions, and (iii) lightweight entailment-based verifiers match or outperform LLM-based model confidence scorers while requiring over $100\times$ fewer FLOPs. Overall, our results expose a factuality-informativeness trade-off and the fragility of the conformal filtering framework under distribution shifts and distractors, highlighting the need for new approaches that treat robustness and usefulness as key metrics alongside reliability, and provide actionable guidance for building RAG pipelines that are both reliable and computationally efficient.
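The abstract's core mechanism, scoring atomic claims and filtering them with a threshold calibrated on held-out data, can be sketched as a standard split-conformal procedure. This is an illustrative sketch under assumed data structures (per-claim scores with truth labels on the calibration set), not the authors' exact implementation:

```python
# Split-conformal claim filtering, illustrative sketch (assumed setup, not the
# paper's exact procedure). Each calibration response is a list of
# (score, is_true) pairs for its atomic claims. We calibrate a threshold tau so
# that, with probability >= 1 - alpha over new responses, keeping only claims
# scoring above tau yields a response with no false claims.
import math

def conformal_threshold(calib, alpha):
    """calib: list of responses, each a list of (score, is_true) claim pairs.

    The nonconformity score of a response is the highest score among its
    FALSE claims: any threshold below that value would admit a false claim.
    """
    noncon = []
    for claims in calib:
        false_scores = [s for s, ok in claims if not ok]
        noncon.append(max(false_scores) if false_scores else -math.inf)
    noncon.sort()
    n = len(noncon)
    # Conformal quantile: the ceil((n + 1) * (1 - alpha))-th smallest score.
    # If k > n, a fully safe threshold does not exist; here we cap at the max
    # (a real system might abstain instead).
    k = math.ceil((n + 1) * (1 - alpha))
    return noncon[min(k, n) - 1]

def filter_claims(claims, tau):
    """Keep only claims scoring strictly above the calibrated threshold.

    claims: list of (claim_text, score) pairs for a new response.
    """
    return [c for c, s in claims if s > tau]

# Toy usage: three calibration responses, target factuality level 1 - 0.25.
calib = [
    [(0.9, True), (0.2, False)],   # nonconformity = 0.2
    [(0.8, True), (0.5, False)],   # nonconformity = 0.5
    [(0.7, True)],                 # all true -> -inf
]
tau = conformal_threshold(calib, alpha=0.25)
kept = filter_claims([("claim A", 0.9), ("claim B", 0.5)], tau)
```

Filtering at the response level like this is what produces the paper's factuality-informativeness trade-off: a high threshold discards false claims but can also strip a response down to a vacuous or empty answer.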