An Agent-Oriented Pluggable Experience-RAG Skill for Experience-Driven Retrieval Strategy Orchestration
2026-05-05 • Artificial Intelligence
AI summary
The authors argue that using the same retrieval method for all types of questions is not ideal because different tasks need different search styles. They created Experience-RAG Skill, a system that sits between the main AI agent and multiple search tools, choosing the best search approach based on the current question and past experiences. Their method improved performance on various benchmark datasets compared to using just one search tool. This work shows that making search strategy selection a separate, flexible part of an AI system can be beneficial.
retrieval-augmented generation · question answering · multi-hop reasoning · scientific verification · retrieval strategy · experience memory · nDCG@10 · BeIR benchmark · retriever orchestration
Authors
Dutao Zhang, Tian Liao
Abstract
Retrieval-augmented generation systems often assume that one fixed retrieval pipeline is sufficient across heterogeneous tasks, yet factoid question answering, multi-hop reasoning, and scientific verification exhibit different retrieval preferences. We present Experience-RAG Skill, an agent-oriented pluggable retrieval orchestration layer positioned between the agent and the retriever pool. The proposed skill analyzes the current scene, consults an experience memory, selects an appropriate retrieval strategy, and returns structured evidence to the agent. Under a fixed candidate pool, Experience-RAG Skill achieves an overall nDCG@10 of 0.8924 on BeIR/nq, BeIR/hotpotqa, and BeIR/scifact, outperforming fixed single-retriever baselines and remaining competitive with Adaptive-RAG-style routing. The results suggest that retrieval strategy selection can be productively encapsulated as a reusable agent skill rather than being hard-coded in the upper workflow.
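The abstract describes an orchestration loop: analyze the scene of the current query, consult an experience memory, pick a retrieval strategy from a candidate pool, and return structured evidence. The paper does not publish an API, so the sketch below is purely illustrative: every name (`ExperienceMemory`, `classify_scene`, `retrieve`, the scene labels, and the toy scoring rule) is an assumption, not the authors' implementation.

```python
# Minimal sketch of an Experience-RAG-style orchestration layer.
# All names and the toy scene classifier are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class Experience:
    scene: str       # e.g. "factoid", "multi-hop", "scientific"
    strategy: str    # which retriever was used
    score: float     # observed retrieval quality (e.g. nDCG@10)


@dataclass
class ExperienceMemory:
    records: list = field(default_factory=list)

    def log(self, scene: str, strategy: str, score: float) -> None:
        self.records.append(Experience(scene, strategy, score))

    def best_strategy(self, scene: str, default: str) -> str:
        # Choose the strategy with the highest mean score for this scene,
        # falling back to a default when no experience exists yet.
        by_strategy: dict[str, list[float]] = {}
        for r in self.records:
            if r.scene == scene:
                by_strategy.setdefault(r.strategy, []).append(r.score)
        if not by_strategy:
            return default
        return max(by_strategy,
                   key=lambda s: sum(by_strategy[s]) / len(by_strategy[s]))


def classify_scene(query: str) -> str:
    # Toy heuristic stand-in for the paper's scene analysis.
    if " and " in query or "both" in query:
        return "multi-hop"
    if "claim" in query or "evidence" in query:
        return "scientific"
    return "factoid"


def retrieve(query: str, memory: ExperienceMemory,
             retriever_pool: dict, default: str = "bm25") -> dict:
    scene = classify_scene(query)
    strategy = memory.best_strategy(scene, default)
    docs = retriever_pool[strategy](query)
    # Structured evidence handed back to the agent.
    return {"scene": scene, "strategy": strategy, "evidence": docs}
```

Because strategy selection lives behind one `retrieve` entry point, the layer is pluggable in the sense the abstract claims: the agent's workflow never hard-codes a retriever, and new retrievers or scenes only touch the pool and the memory.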