Recursive Agent Optimization

2026-05-07 · Machine Learning

Machine Learning · Artificial Intelligence · Computation and Language · Multiagent Systems
AI summary

The authors present Recursive Agent Optimization (RAO), a reinforcement learning method that teaches AI agents to break problems down by spawning new instances of themselves to handle subtasks. Through this divide-and-conquer strategy, agents can solve problems larger and harder than any single agent could handle alone. RAO trains agents to decide when and how to delegate and communicate, letting them work beyond limits such as the model's context window. The authors show that this approach improves training efficiency and reduces wall-clock time compared to single-agent systems.

reinforcement learning · recursive agents · divide-and-conquer · inference-time scaling · task delegation · generalization · context window · training efficiency · multi-agent systems
Authors
Apurva Gandhi, Satyaki Chakraborty, Xiangjun Wang, Aviral Kumar, Graham Neubig
Abstract
We introduce Recursive Agent Optimization (RAO), a reinforcement learning approach for training recursive agents: agents that can spawn and delegate sub-tasks to new instantiations of themselves recursively. Recursive agents implement an inference-time scaling algorithm that naturally allows agents to scale to longer contexts and generalize to more difficult problems via divide-and-conquer. RAO provides a method to train models to best take advantage of such recursive inference, teaching agents when and how to delegate and communicate. We find that recursive agents trained in this way enjoy better training efficiency, scale to tasks beyond the model's context window, generalize to tasks much harder than those they were trained on, and achieve reduced wall-clock time compared to single-agent systems.
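The core idea of recursive delegation described above can be illustrated with a toy sketch. This is not the authors' implementation; the function name, the list-summing task, and the `context_limit` parameter are all hypothetical stand-ins for an agent, its sub-task format, and its context window.

```python
# Illustrative sketch only: a toy "recursive agent" that delegates
# sub-tasks to fresh copies of itself whenever the input exceeds
# its simulated context window. A real recursive agent would be an
# LLM deciding when to spawn sub-agents; here the task is simply
# summing a list too long to fit "in context" at once.

def solve(task, context_limit=4):
    """Solve `task` (a list of numbers to sum) via divide-and-conquer.

    If the task fits within the context window, solve it directly.
    Otherwise, split it and delegate each half to a new instantiation
    of the agent (a recursive call), then combine the results.
    """
    if len(task) <= context_limit:
        return sum(task)  # base case: small enough to solve directly
    mid = len(task) // 2
    # Delegate each half to a fresh instantiation of the agent.
    left = solve(task[:mid], context_limit)
    right = solve(task[mid:], context_limit)
    # Combine the sub-agents' answers and report the result upward.
    return left + right

# A task far larger than any single agent's context window:
print(solve(list(range(100))))  # → 4950
```

Because the two recursive calls are independent, they could in principle run in parallel, which is one intuition for why delegation can reduce wall-clock time relative to a single agent working sequentially.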