HAGE: Harnessing Agentic Memory via RL-Driven Weighted Graph Evolution
2026-05-11 • Artificial Intelligence
AI summary
The authors propose HAGE, a new way for large language models to remember information by organizing memory as a graph with weighted connections that change depending on the question asked. Instead of just looking up facts, their method moves through this graph step-by-step, focusing on the most relevant relationships for better answers. They also use reinforcement learning to improve how the model decides which paths to follow, leading to better reasoning over long chains of information. Their experiments show HAGE works better and more efficiently than previous memory systems.
large language models, memory retrieval, relational graphs, graph traversal, edge embeddings, reinforcement learning, query-conditioned retrieval, semantic similarity, multi-relational graphs, agentic systems
Authors
Dongming Jiang, Yi Li, Guanpeng Li, Qiannan Li, Bingzhe Li
Abstract
Memory retrieval in agentic large language model (LLM) systems is often treated as a static lookup problem, relying on flat vector search or fixed binary relational graphs. However, fixed graph structures cannot capture the varying strength, confidence, and query-dependent relevance of relationships between events. In this paper, we propose HAGE, a weighted multi-relational memory framework that reconceptualizes retrieval as sequential, query-conditioned traversal over a unified relational memory graph. Memory is organized as relation-specific graph views over shared memory nodes, where each edge is associated with a trainable relation feature vector encoding multiple relational signals. Given a query, an LLM-based classifier identifies the relational intent, and a routing network dynamically modulates the corresponding dimensions of the edge embedding. Traversal scores are computed via a learned combination of semantic similarity and these query-conditioned edge representations. This allows memory traversal to prioritize high-utility relational paths while softly suppressing noisy or weakly relevant connections. Beyond adaptive traversal, HAGE further introduces a reinforcement learning-based training framework that jointly optimizes routing behavior and edge representations using downstream tasks. Finally, empirical results demonstrate improved long-horizon reasoning accuracy and a favorable accuracy-efficiency trade-off compared to state-of-the-art agentic memory systems. Our code is available at https://github.com/FredJiang0324/HAGE_MVPReview.
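The abstract's core scoring mechanism can be sketched concretely. The snippet below is a minimal, self-contained illustration (not the authors' implementation) of one traversal step: a relation-intent distribution gates relation-specific slices of an edge feature vector, and the final traversal score is a learned combination of semantic similarity and the gated edge score. All names, dimensions, and the fixed mixing weight `alpha` are assumptions for illustration only; in HAGE these components would be trained with reinforcement learning.

```python
# Illustrative sketch of query-conditioned edge scoring (assumed design,
# not the HAGE reference implementation).
import math
import random

random.seed(0)

NUM_RELATIONS = 4   # e.g. temporal, causal, entity, topical (assumed relation types)
REL_DIM = 8         # per-relation slice of the edge feature vector (assumed)
EMB_DIM = 16        # node / query embedding size (assumed)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cosine(a, b):
    return dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)))

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def routing_gate(relation_probs):
    # One gate value per edge-feature dimension: each relation "owns"
    # a REL_DIM-wide slice of the edge feature vector, so its intent
    # probability modulates exactly those dimensions.
    return [p for p in relation_probs for _ in range(REL_DIM)]

def traversal_score(query_emb, node_emb, edge_feat, relation_probs, w, alpha=0.5):
    # Semantic similarity between the query and the candidate memory node.
    sim = cosine(query_emb, node_emb)
    # Query-conditioned edge representation: gate relation-specific
    # dimensions, then score with a (here random, in practice learned) w.
    gated = [g * f for g, f in zip(routing_gate(relation_probs), edge_feat)]
    edge_score = dot(gated, w)
    # alpha would be learned jointly with w; fixed here for illustration.
    return alpha * sim + (1 - alpha) * edge_score

def rand_vec(n):
    return [random.gauss(0, 1) for _ in range(n)]

# Toy usage: score three candidate edges out of the current node and
# pick the highest-utility relational path.
query = rand_vec(EMB_DIM)
relation_probs = softmax(rand_vec(NUM_RELATIONS))  # stand-in for the LLM intent classifier
w = rand_vec(NUM_RELATIONS * REL_DIM)
candidates = [(rand_vec(EMB_DIM), rand_vec(NUM_RELATIONS * REL_DIM)) for _ in range(3)]
scores = [traversal_score(query, n, e, relation_probs, w) for n, e in candidates]
best = max(range(len(scores)), key=scores.__getitem__)
```

In this reading, "softly suppressing noisy connections" corresponds to low intent probabilities shrinking the gated slices toward zero rather than pruning edges outright.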