ActiveGlasses: Learning Manipulation with Active Vision from Ego-centric Human Demonstration
2026-04-09 • Robotics
AI summary
The authors created ActiveGlasses, a system that uses a camera mounted on smart glasses to record how humans naturally look at and handle objects with their bare hands. This data lets robots learn the same tasks without specialized handheld devices or per-robot retraining. Because the same camera is used during human demonstration and robot deployment, the system can reproduce human vision and hand movements directly. Tested on challenging tasks, the method transfers to different robots without extra training. Overall, the authors demonstrate a simple, scalable way to teach robots by watching humans.
ego-centric vision, robot manipulation, active vision, zero-shot transfer, object trajectories, point-cloud policy, 6-DoF perception, human-robot interaction, data collection, policy inference
Authors
Yanwen Zou, Chenyang Shi, Wenye Yu, Han Xue, Jun Lv, Ye Pan, Chuan Wen, Cewu Lu
Abstract
Large-scale real-world robot data collection is a prerequisite for bringing robots into everyday deployment. However, existing pipelines often rely on specialized handheld devices to bridge the embodiment gap, which not only increases operator burden and limits scalability, but also makes it difficult to capture the naturally coordinated perception-manipulation behaviors of human daily interaction. This challenge calls for a more natural system that can faithfully capture human manipulation and perception behaviors while enabling zero-shot transfer to robotic platforms. We introduce ActiveGlasses, a system for learning robot manipulation from ego-centric human demonstrations with active vision. A stereo camera mounted on smart glasses serves as the sole perception device for both data collection and policy inference: the operator wears it during bare-hand demonstrations, and the same camera is mounted on a 6-DoF perception arm during deployment to reproduce human active vision. To enable zero-shot transfer, we extract object trajectories from demonstrations and use an object-centric point-cloud policy to jointly predict manipulation and head movement. Across several challenging tasks involving occlusion and precise interaction, ActiveGlasses achieves zero-shot transfer with active vision, consistently outperforms strong baselines under the same hardware setup, and generalizes across two robot platforms.
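The abstract's central component is an object-centric point-cloud policy that jointly predicts a manipulation action and a head (camera) movement. As a rough, hypothetical sketch of what such a two-headed policy interface could look like, the PyTorch module below pairs a PointNet-style point encoder with separate manipulation and vision heads. The class name, network sizes, and action dimensions are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class ObjectCentricPointCloudPolicy(nn.Module):
    """Minimal sketch (assumed architecture): an object-centric point-cloud
    policy that jointly predicts a manipulation action and a 6-DoF head
    (perception-arm) movement from an object point cloud."""

    def __init__(self, feat_dim: int = 256):
        super().__init__()
        # Shared per-point encoder (PointNet-style), applied to each xyz point.
        self.encoder = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        # Two output heads: end-effector action and head-camera motion.
        self.manip_head = nn.Linear(feat_dim, 7)   # e.g. 6-DoF pose + gripper
        self.vision_head = nn.Linear(feat_dim, 6)  # 6-DoF perception-arm delta

    def forward(self, object_points: torch.Tensor):
        # object_points: (B, N, 3) point cloud cropped around the object
        feats = self.encoder(object_points)        # (B, N, feat_dim)
        global_feat = feats.max(dim=1).values      # permutation-invariant pooling
        return self.manip_head(global_feat), self.vision_head(global_feat)


# Usage: one forward pass on a dummy object point cloud.
policy = ObjectCentricPointCloudPolicy()
pc = torch.randn(1, 1024, 3)
manip_action, head_motion = policy(pc)
print(manip_action.shape, head_motion.shape)  # torch.Size([1, 7]) torch.Size([1, 6])
```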