Superintelligent Retrieval Agent: The Next Frontier of Information Retrieval
2026-05-07 • Information Retrieval
Information Retrieval · Artificial Intelligence · Machine Learning
AI summary
The authors introduce SIRA, a new method to improve how computer agents search large knowledge bases. Instead of doing many back-and-forth searches like a beginner, SIRA makes one smart search that predicts the best words to find the right information quickly. It uses a language model to add helpful words to documents beforehand and to guess missing words in the query, then combines everything into one effective search step. Tests show SIRA works better and faster than other common search methods without needing extra training. This makes search both simpler and more accurate.
Retrieval-augmented agents · Large language models (LLMs) · Corpus-discriminative retrieval · BM25 · Query expansion · Document-frequency statistics · Dense retrievers · BEIR benchmarks · Multi-round search · Lexical retrieval
Authors
Zeyu Yang, Qi Ma, Jason Chen, Anshumali Shrivastava
Abstract
Retrieval-augmented agents are increasingly the interface to large organizational knowledge bases, yet most still treat retrieval as a black box: they issue exploratory queries, inspect returned snippets, and iteratively reformulate until useful evidence emerges. This approach resembles how a newcomer searches an unfamiliar database rather than how an expert navigates it with strong priors about terminology and likely evidence, and it results in unnecessary retrieval rounds, increased latency, and poor recall. We introduce SuperIntelligent Retrieval Agent (SIRA), which defines superintelligence in retrieval as the ability to compress multi-round exploratory search into a single corpus-discriminative retrieval action. SIRA does not merely ask which terms are relevant to the query; it asks which terms are likely to separate the desired evidence from corpus-level confusers. On the corpus side, an LLM enriches each document offline with missing search vocabulary; on the query side, it predicts evidence vocabulary omitted by the query; and document-frequency statistics, exposed as a tool call, filter proposed terms that are absent, overly common, or unlikely to create retrieval margin. The final retrieval step is a single weighted BM25 call combining the original query with the validated expansion. Across ten BEIR benchmarks and downstream question-answering tasks, SIRA achieves significantly superior performance, outperforming dense retrievers and state-of-the-art multi-round agentic baselines, demonstrating that one well-formed lexical query, guided by LLM cognition and lightweight corpus statistics, can exceed substantially more expensive multi-round search while remaining interpretable, training-free, and efficient.
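The pipeline the abstract describes (LLM-proposed expansion terms, filtered by document-frequency statistics, then one weighted BM25 call) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the toy corpus, the `proposed` expansion list standing in for LLM output, the `max_df_frac` threshold, and the 0.5 expansion weight are all hypothetical choices.

```python
import math
from collections import Counter

# Toy corpus; in SIRA each document would already have been enriched
# offline by an LLM with missing search vocabulary.
corpus = [
    "bm25 lexical retrieval scoring with term frequency",
    "dense retrieval embeds queries and documents",
    "query expansion adds evidence vocabulary for retrieval",
]
docs = [d.split() for d in corpus]
N = len(docs)
avgdl = sum(len(d) for d in docs) / N

# Document-frequency statistics, playing the role of the filtering "tool call".
df = Counter()
for d in docs:
    df.update(set(d))

def keep_term(t, max_df_frac=0.67):
    """Drop proposed expansion terms that are absent from the corpus
    or too common to create retrieval margin (threshold is hypothetical)."""
    return 0 < df[t] <= max_df_frac * N

def bm25_score(query_weights, doc, k1=1.5, b=0.75):
    """Weighted BM25: each query term contributes with its own weight."""
    tf = Counter(doc)
    score = 0.0
    for t, w in query_weights.items():
        if df[t] == 0:
            continue
        idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
        norm = tf[t] + k1 * (1 - b + b * len(doc) / avgdl)
        score += w * idf * tf[t] * (k1 + 1) / norm
    return score

# Original query plus LLM-proposed expansion terms (hypothetical values).
query = "lexical retrieval".split()
proposed = ["bm25", "scoring", "unicorn", "retrieval"]
# "unicorn" is absent from the corpus; "retrieval" appears in every
# document, so neither creates retrieval margin and both are filtered.
expansion = [t for t in proposed if keep_term(t)]  # -> ["bm25", "scoring"]

# The single retrieval action: original terms at full weight,
# validated expansion terms down-weighted.
weights = {t: 1.0 for t in query}
for t in expansion:
    weights.setdefault(t, 0.5)
ranked = sorted(range(N), key=lambda i: bm25_score(weights, docs[i]),
                reverse=True)
```

Here `ranked[0]` is document 0, the only document matching both the original query and the validated expansion; the down-weighting keeps the expansion from overwhelming the user's own terms.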