Leveraging LLM Parametric Knowledge for Fact Checking without Retrieval

2026-03-05

Computation and Language · Artificial Intelligence
AI summary

The authors focus on verifying whether statements made by AI models are true without looking up information from the internet or other sources. They test how well different methods work for checking facts directly within the AI, especially for rare facts, different languages, and long texts. Their experiments show that approaches using the AI's internal reasoning are better than those relying on output probabilities. They propose a new method called INTRA that uses these internal signals and performs best across many tests. This work suggests fact-checking inside AI models can help make them more trustworthy and efficient.

Large Language Models · Fact-checking · Retrieval · Internal representations · Logit-based methods · Multilinguality · Generalization · Long-tail knowledge · Natural language claims · AI trustworthiness
Authors
Artem Vazhentsev, Maria Marina, Daniil Moskovskiy, Sergey Pletenev, Mikhail Seleznyov, Mikhail Salnikov, Elena Tutubalina, Vasily Konovalov, Irina Nikishina, Alexander Panchenko, Viktor Moskvoretskii
Abstract
Trustworthiness is a core research challenge for agentic AI systems built on Large Language Models (LLMs). To enhance trust, natural language claims from diverse sources, including human-written text, web content, and model outputs, are commonly checked for factuality by retrieving external knowledge and using an LLM to verify the faithfulness of claims to the retrieved evidence. As a result, such methods are constrained by retrieval errors and external data availability, while leaving the models' intrinsic fact-verification capabilities largely unused. We propose the task of fact-checking without retrieval, focusing on the verification of arbitrary natural language claims, independent of their source. To study this setting, we introduce a comprehensive evaluation framework focused on generalization, testing robustness to (i) long-tail knowledge, (ii) variation in claim sources, (iii) multilinguality, and (iv) long-form generation. Across 9 datasets, 18 methods, and 3 models, our experiments indicate that logit-based approaches often underperform compared to those that leverage internal model representations. Building on this finding, we introduce INTRA, a method that exploits interactions between internal representations and achieves state-of-the-art performance with strong generalization. More broadly, our work establishes fact-checking without retrieval as a promising research direction that can complement retrieval-based frameworks, improve scalability, and enable the use of such systems as reward signals during training or as components integrated into the generation process.
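To make the contrast between the two families of methods concrete, here is a minimal, hypothetical sketch (not the paper's INTRA method or its experimental setup): a linear probe trained on synthetic "hidden-state" features, compared against a scalar logit-style confidence score. All data, dimensions, and the probe itself are illustrative assumptions.

```python
import numpy as np

# Synthetic stand-ins: "hidden" mimics internal representations in which
# factual vs. non-factual claims are linearly separable; "logit_conf"
# mimics a noisy output-probability score only weakly tied to factuality.
rng = np.random.default_rng(0)
n, d = 400, 16

w_true = rng.normal(size=d)                    # latent "truth" direction
labels = rng.integers(0, 2, size=n)            # 1 = factual claim
hidden = rng.normal(size=(n, d)) + np.outer(2 * labels - 1, w_true)
logit_conf = 0.3 * (2 * labels - 1) + rng.normal(size=n)

def train_probe(X, y, lr=0.1, steps=500):
    """Logistic-regression probe fit by plain batch gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
        g = p - y                               # gradient of log-loss
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

w, b = train_probe(hidden, labels)
probe_acc = (((hidden @ w + b) > 0) == (labels == 1)).mean()
logit_acc = ((logit_conf > 0) == (labels == 1)).mean()
print(f"probe accuracy: {probe_acc:.2f}, logit baseline: {logit_acc:.2f}")
```

On this toy data the representation probe separates the classes almost perfectly while the scalar confidence barely beats chance, mirroring the paper's finding that internal representations carry a stronger factuality signal than output logits.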