NavTrust: Benchmarking Trustworthiness for Embodied Navigation

2026-03-19 · Robotics

Robotics · Artificial Intelligence · Computer Vision and Pattern Recognition · Machine Learning
AI summary

The authors created NavTrust, a benchmark that checks how well navigation agents handle problems like corrupted camera images or confusing instructions. They looked at two types of navigation: following natural-language directions and finding target objects. Their tests showed that current agents often struggle when things go wrong, like blurry pictures or unclear commands. They also tried ways to help agents cope with these issues, including testing on a real robot. This work helps make navigation robots more reliable in real life.

Vision-Language Navigation (VLN) · Object-Goal Navigation (OGN) · RGB images · Depth sensing · Input corruption · Robustness · Embodied navigation · Instruction following · Benchmark · Mobile robots
Authors
Huaide Jiang, Yash Chaudhary, Yuping Wang, Zehao Wang, Raghav Sharma, Manan Mehta, Yang Zhou, Lichao Sun, Zhiwen Fan, Zhengzhong Tu, Jiachen Li
Abstract
There are two major categories of embodied navigation: Vision-Language Navigation (VLN), where agents navigate by following natural language instructions, and Object-Goal Navigation (OGN), where agents navigate to a specified target object. However, existing work primarily evaluates model performance under nominal conditions, overlooking the corruptions that arise in real-world settings. To address this gap, we present NavTrust, a unified benchmark that systematically corrupts input modalities, including RGB, depth, and instructions, in realistic scenarios and evaluates their impact on navigation performance. To the best of our knowledge, NavTrust is the first benchmark that exposes embodied navigation agents to diverse RGB-Depth corruptions and instruction variations in a unified framework. Our extensive evaluation of seven state-of-the-art approaches reveals substantial performance degradation under realistic corruptions, which highlights critical robustness gaps and provides a roadmap toward more trustworthy embodied navigation systems. Furthermore, we systematically evaluate four distinct mitigation strategies to enhance robustness against RGB-Depth and instruction corruptions. Using Uni-NaVid and ETPNav as base models, we deployed the mitigations on a real mobile robot and observed improved robustness to corruptions. The project website is: https://navtrust.github.io.
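To make the idea of "systematically corrupting input modalities" concrete, the sketch below shows what RGB and depth corruptions at graded severity levels might look like. This is an illustrative assumption, not NavTrust's actual implementation: the function names (`gaussian_noise`, `depth_dropout`), the severity-to-sigma mapping, and the dropout probability are all hypothetical choices made here for demonstration.

```python
import numpy as np

def gaussian_noise(rgb, severity=3):
    """Illustrative RGB corruption: additive Gaussian noise at
    severity 1-5. The sigma schedule is a hypothetical choice,
    not taken from the NavTrust benchmark."""
    sigma = [0.04, 0.06, 0.08, 0.10, 0.15][severity - 1]
    img = rgb.astype(np.float64) / 255.0
    noisy = img + np.random.normal(0.0, sigma, size=img.shape)
    return (np.clip(noisy, 0.0, 1.0) * 255).astype(np.uint8)

def depth_dropout(depth, drop_prob=0.2, seed=None):
    """Illustrative depth corruption: randomly zero out pixels to
    mimic sensor dropout (e.g., reflective or out-of-range surfaces)."""
    rng = np.random.default_rng(seed)
    keep = rng.random(depth.shape) >= drop_prob
    return depth * keep

# Toy observations standing in for an agent's RGB-D input.
rgb = np.full((8, 8, 3), 128, dtype=np.uint8)
depth = np.ones((8, 8), dtype=np.float32)

corrupted_rgb = gaussian_noise(rgb, severity=5)
corrupted_depth = depth_dropout(depth, drop_prob=0.3, seed=0)
```

A benchmark along these lines would sweep each corruption over its severity levels and re-run a fixed navigation policy, comparing success rates against the clean-input baseline.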