ClickAIXR: On-Device Multimodal Vision-Language Interaction with Real-World Objects in Extended Reality

2026-04-06

Computer Vision and Pattern Recognition · Graphics · Human-Computer Interaction
AI summary

The authors created ClickAIXR, a system that runs entirely on a device such as augmented reality glasses and lets users select real-world objects by clicking on them. Unlike systems that rely on cloud AI or gaze tracking, ClickAIXR processes the selected object's image locally and answers questions about it in text or speech. This makes interactions less ambiguous and protects privacy, because no data leaves the device. The authors tested ClickAIXR with real users and found it performed well, with moderate latency and acceptable user satisfaction.

Extended Reality (XR) · Vision-Language Model (VLM) · On-device AI · Object Selection · Natural Language Processing · User Study · Latency · Magic Leap SDK · Privacy · Multimodal Interaction
Authors
Dawar Khan, Alexandre Kouyoumdjian, Xinyu Liu, Omar Mena, Dominik Engel, Ivan Viola
Abstract
We present ClickAIXR, a novel on-device framework for multimodal vision-language interaction with objects in extended reality (XR). Unlike prior systems that rely on cloud-based AI (e.g., ChatGPT) or gaze-based selection (e.g., GazePointAR), ClickAIXR integrates an on-device vision-language model (VLM) with a controller-based object selection paradigm, enabling users to precisely click on real-world objects in XR. Once selected, the object image is processed locally by the VLM to answer natural language questions through both text and speech. This object-centered interaction reduces ambiguity inherent in gaze- or voice-only interfaces and improves transparency by performing all inference on-device, addressing concerns around privacy and latency. We implemented ClickAIXR in the Magic Leap SDK (C API) with ONNX-based local VLM inference. We conducted a user study comparing ClickAIXR with Gemini 2.5 Flash and ChatGPT 5, evaluating usability, trust, and user satisfaction. Results show that latency is moderate and user experience is acceptable. Our findings demonstrate the potential of click-based object selection combined with on-device AI to advance trustworthy, privacy-preserving XR interactions. The source code and supplementary materials are available at: nanovis.org/ClickAIXR.html
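The interaction flow the abstract describes — a controller click selects a real-world object, the object's image region is cropped, and an on-device VLM answers a question about it — can be sketched as follows. This is a hypothetical Python illustration only: the actual system is implemented against the Magic Leap C API with ONNX-based inference, and the `crop_selection` and `run_local_vlm` names below are assumptions made for this sketch, with the VLM stubbed out.

```python
# Sketch of ClickAIXR's click-to-answer pipeline (hypothetical illustration).
# The real system uses the Magic Leap SDK (C API) with an ONNX-runtime VLM;
# here the model call is a stub so the flow is runnable anywhere.

def crop_selection(frame, cx, cy, half=32):
    """Crop a square patch of the camera frame around the clicked pixel."""
    h, w = len(frame), len(frame[0])
    x0, x1 = max(0, cx - half), min(w, cx + half)
    y0, y1 = max(0, cy - half), min(h, cy + half)
    return [row[x0:x1] for row in frame[y0:y1]]

def run_local_vlm(patch, question):
    """Placeholder for on-device VLM inference (e.g., via ONNX Runtime).
    All processing stays on the device; no network calls are made."""
    return f"[local answer about a {len(patch)}x{len(patch[0])} px region: {question}]"

# Simulated grayscale camera frame and a controller click at its center.
frame = [[0] * 640 for _ in range(480)]
patch = crop_selection(frame, cx=320, cy=240)
answer = run_local_vlm(patch, "What is this object?")
```

The key design point the sketch reflects is that the selection step yields an explicit, object-centered image crop — the disambiguation signal that gaze- or voice-only interfaces lack — and that the crop never leaves the device.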