Plan in Sandbox, Navigate in Open Worlds: Learning Physics-Grounded Abstracted Experience for Embodied Navigation
2026-05-11 • Robotics
AI summary
The authors present SAGE, a system that teaches robots to navigate by practicing in simple, physics-based sandbox environments instead of complex, photorealistic simulations. The approach mimics how humans mentally rehearse plans before acting: the system builds these simplified worlds, trains the agent with reinforcement learning, and then transfers the learned skills to real-world navigation. Experiments show that SAGE improves navigation success rates and works on actual indoor robots, suggesting that learning in abstracted, physics-grounded environments helps robots navigate better.
Vision-Language Models · Embodied Navigation · Physics-grounded Simulation · Reinforcement Learning · Semantic Abstraction · Policy Transfer · Asymmetric Adaptive Clipping · Robot Control · Mental Simulation
Authors
Zhixuan Shen, Jiawei Du, Ziyu Guo, Han Luo, Lilan Peng, Joey Tianyi Zhou, Haonan Luo, Tianrui Li
Abstract
Vision-Language Models (VLMs) have demonstrated exceptional general reasoning capabilities. However, their performance in embodied navigation remains hindered by a scarcity of aligned open-world vision and robot control data. While simulators provide a cost-effective alternative for data collection, their reliance on photorealistic simulation often limits the transferability of learned policies. To this end, we propose Sandbox-Abstracted Grounded Experience (SAGE), a framework that enables agents to learn within a physics-grounded semantic abstraction rather than a photorealistic simulation, mimicking the human capacity for mental simulation, in which plans are rehearsed in simplified physics abstractions before execution. SAGE operates via three synergistic phases: (1) Genesis: constructing diverse, physics-constrained semantic environments to bootstrap experience; (2) Evolution: distilling experience through Reinforcement Learning (RL), using a novel asymmetric adaptive clipping mechanism to stabilize policy updates; (3) Navigation: bridging the abstract policy to open-world control. We demonstrate that SAGE significantly improves planner-assisted embodied navigation, achieving a 53.21% LLM-Match Success Rate on A-EQA (+9.7% over baseline), while showing encouraging transfer to deployment on a physical indoor robot.
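The abstract names an asymmetric adaptive clipping mechanism for the RL phase but does not spell it out here. As a rough illustration only, a PPO-style surrogate with separate lower and upper clip bounds and a simple KL-driven adaptation rule might look like the sketch below; the bound values, the adaptation rule, and all function names are assumptions for illustration, not the paper's exact method.

```python
import torch

def asymmetric_clip_loss(ratio, advantage, eps_low=0.2, eps_high=0.3):
    """PPO-style surrogate with asymmetric clip bounds (illustrative).

    ratio:     pi_new(a|s) / pi_old(a|s) for each sampled action
    advantage: estimated advantage for each action
    eps_low / eps_high: separate lower and upper clip ranges; their
    values are placeholder assumptions, not the paper's settings.
    """
    clipped = torch.clamp(ratio, 1.0 - eps_low, 1.0 + eps_high)
    # Pessimistic (elementwise min) surrogate, as in standard PPO.
    surrogate = torch.min(ratio * advantage, clipped * advantage)
    return -surrogate.mean()  # minimize the negative surrogate

def adapt_eps(eps, observed_kl, target_kl=0.01, rate=1.05):
    """Hypothetical 'adaptive' component: shrink the clip range when
    the policy moves too far (KL above target), widen it otherwise."""
    return eps / rate if observed_kl > target_kl else eps * rate
```

Decoupling the two bounds lets an update tolerate larger probability increases than decreases (or vice versa), which is one plausible way such a mechanism could stabilize updates when advantages estimated in abstracted environments are noisy.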