Safe-SAGE: Social-Semantic Adaptive Guidance for Safe Engagement through Laplace-Modulated Poisson Safety Functions
2026-03-05 • Robotics
AI summary
The authors describe Safe-SAGE, a method that helps robots move safely by recognizing what kinds of obstacles are around them, instead of treating all obstacles the same way. They combine different sensors and camera data to keep track of objects even when they go out of view. Their approach uses mathematical tools to adjust how the robot moves, making sure it keeps a safe distance depending on the type of obstacle and follows social rules when passing others. This helps legged robots navigate complex, changing environments more intelligently while still ensuring safety.
control barrier functions, Poisson safety function, Laplace guidance field, instance segmentation, multi-sensor fusion, persistent object tracking, model predictive control, legged robots, safety-critical control, semantic understanding
Authors
Lizhi Yang, Ryan M. Bena, Meg Wilkinson, Gilbert Bahati, Andy Navarro Brenes, Ryan K. Cosner, Aaron D. Ames
Abstract
Traditional safety-critical control methods, such as control barrier functions, suffer from semantic blindness: they exhibit the same behavior around every obstacle regardless of contextual significance, treating all obstacles uniformly despite their differing semantic meanings. We present Safe-SAGE (Social-Semantic Adaptive Guidance for Safe Engagement), a unified framework that bridges the gap between high-level semantic understanding and low-level safety-critical control through a Poisson safety function (PSF) modulated by a Laplace guidance field. Our approach perceives the environment by fusing multi-sensor point clouds with vision-based instance segmentation and persistent object tracking, maintaining up-to-date semantics even beyond the camera's field of view. A multi-layer safety filter then modulates system inputs to achieve safe navigation using this semantic understanding of the environment. This safety filter consists of a model predictive control layer and a control barrier function layer; both utilize the PSF and flux modulation of the guidance field to introduce varying levels of conservatism and multi-agent passing norms for different obstacles in the environment. Our framework enables legged robots to navigate semantically rich, dynamic environments with context-dependent safety margins while maintaining rigorous safety guarantees.
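To illustrate the control-barrier-function layer described above, the sketch below shows a minimal single-constraint CBF safety filter for a planar single integrator, with a safety margin that depends on a semantic obstacle class. This is an assumption-laden toy, not the authors' implementation: the class labels, margin values, and closed-form QP solution are all illustrative, and Safe-SAGE's actual filter uses a Poisson safety function rather than the simple distance barrier used here.

```python
import numpy as np

# Illustrative (hypothetical) semantic safety margins in metres; the paper's
# context-dependent margins come from its PSF/guidance field, not a lookup table.
SEMANTIC_MARGINS = {"wall": 0.3, "furniture": 0.5, "person": 1.0}

def cbf_filter(x, u_nom, obstacle_pos, obstacle_class, alpha=1.0):
    """Minimally modify u_nom so h(x) = ||x - p||^2 - r^2 stays nonnegative
    for a single integrator x_dot = u (toy stand-in for the robot dynamics)."""
    r = SEMANTIC_MARGINS[obstacle_class]
    d = x - obstacle_pos
    h = d @ d - r**2            # barrier value: positive means safe
    grad_h = 2.0 * d            # dh/dx
    # CBF condition for x_dot = u:  grad_h . u >= -alpha * h
    slack = grad_h @ u_nom + alpha * h
    if slack >= 0.0:            # nominal input already satisfies the condition
        return u_nom
    # Single-constraint QP has a closed form: project u_nom onto the
    # boundary of the safe half-space in input space.
    return u_nom - (slack / (grad_h @ grad_h)) * grad_h

# Heading straight at an obstacle at the origin from x = (2, 0):
x = np.array([2.0, 0.0])
u_nom = np.array([-1.0, 0.0])
u_person = cbf_filter(x, u_nom, np.zeros(2), "person")
u_wall = cbf_filter(x, u_nom, np.zeros(2), "wall")
# The "person" class, with its larger margin, yields the more conservative input.
```

The semantic class only changes the margin `r`, yet the filtered inputs differ: the same nominal command is braked harder near a person than near a wall, which is the context-dependent conservatism the abstract describes (realized there via PSF flux modulation rather than per-class radii).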