Observing and Controlling Features in Vision-Language-Action Models

2026-03-05
Robotics

AI summary

The authors study Vision-Language-Action models (VLAs), which combine visual perception, language understanding, and action generation, as in robots. They focus on understanding and controlling what these models represent internally by detecting and adjusting specific internal features with simple tools. Through small, targeted changes, they can steer a robot's behavior in real time without retraining it. Their experiments show that VLAs have interpretable inner workings that can be steered quickly to match user goals or tasks.

Vision-Language-Action Models · Large Language Models · Transformer · Diffusion models · Mechanistic interpretability · Linear classifier · Optimal control · Internal representations · Closed-loop control · Online adaptation
Authors
Hugo Buurmeijer, Carmen Amo Alonso, Aiden Swann, Marco Pavone
Abstract
Vision-Language-Action Models (VLAs) have shown remarkable progress towards embodied intelligence. While their architecture partially resembles that of Large Language Models (LLMs), VLAs exhibit higher complexity due to their multi-modal inputs and outputs and their often hybrid combination of transformer and diffusion heads. This is part of the reason why insights from mechanistic interpretability in LLMs, which explain how internal model representations relate to output behavior, do not trivially transfer to VLA counterparts. In this work, we propose to close this gap by introducing and analyzing two main concepts: feature-observability and feature-controllability. In particular, we first study features that are linearly encoded in representation space and show how they can be observed by means of a linear classifier. Then, we use a minimal linear intervention grounded in optimal control to accurately place internal representations and steer the VLA's output towards a desired region. Our results show that targeted, lightweight interventions can reliably steer a robot's behavior while preserving closed-loop capabilities. Through simulation experiments on different VLA architectures ($π_{0.5}$ and OpenVLA), we demonstrate that VLAs possess interpretable internal structure amenable to online adaptation without fine-tuning, enabling real-time alignment with user preferences and task requirements.
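The first idea in the abstract, feature-observability via a linear classifier, can be illustrated with a minimal sketch. The setup below is entirely hypothetical: it uses synthetic vectors as stand-ins for VLA hidden states, with a binary feature linearly encoded along a fixed direction, and fits a logistic-regression probe with plain gradient descent. It is not the authors' actual probing pipeline, only an illustration of reading a linearly encoded feature off representation space.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for VLA hidden states: a binary feature is
# linearly encoded along a fixed unit direction in a d-dimensional
# representation space, on top of isotropic background noise.
d, n = 32, 400
feature_dir = rng.normal(size=d)
feature_dir /= np.linalg.norm(feature_dir)
labels = rng.integers(0, 2, size=n)                   # feature on / off
states = rng.normal(scale=0.5, size=(n, d))           # background activity
states += np.outer(2.0 * labels - 1.0, feature_dir)   # +/- shift along the direction

# Linear classifier (logistic regression via gradient descent)
# acting as the feature-observability probe.
w, b = np.zeros(d), 0.0
lr = 0.5
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(states @ w + b)))   # predicted P(feature on)
    w -= lr * (states.T @ (p - labels)) / n       # gradient step on weights
    b -= lr * np.mean(p - labels)                 # gradient step on bias

accuracy = np.mean(((states @ w + b) > 0) == labels)
```

If the feature really is linearly encoded, a probe like this separates the two cases almost perfectly; on real VLA activations, the probe would be trained on hidden states collected from a chosen layer instead of synthetic data.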
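The second idea, a minimal linear intervention that places an internal representation at a desired readout, can be sketched as a smallest-norm correction along a probe direction. This is one simple interpretation of "minimal linear intervention": given a probe direction w and a target readout, the least-norm delta satisfying w·(h + delta) = target is delta = (target − w·h)·w/‖w‖². The paper's control-theoretic formulation may differ in detail; all names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical probe direction w (e.g. learned by a linear classifier)
# and a hidden state h from some layer of the model.
d = 32
w = rng.normal(size=d)
h = rng.normal(size=d)

target = 3.0            # desired feature readout w @ h_steered
gap = target - w @ h    # how far the current state is from the target

# Minimum-norm linear intervention: the smallest delta (in Euclidean norm)
# that places the representation exactly at the target readout.
delta = gap * w / (w @ w)
h_steered = h + delta

readout = w @ h_steered   # equals target by construction
```

Because the correction acts only along w, components of the state orthogonal to the feature direction are left untouched, which is one way to keep an intervention "lightweight" while still placing the readout exactly where desired.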