CapVector: Learning Transferable Capability Vectors in Parametric Space for Vision-Language-Action Models

2026-05-11
Computer Vision and Pattern Recognition · Robotics
AI summary

The authors propose a method to improve pretrained vision-language-action models during fine-tuning without adding much extra computation. Instead of combining different training objectives all at once, they train two separate models on a small task set to capture two aspects: general skills and task-specific actions. Comparing the parameters of these two models yields 'capability vectors' that enhance the original model when merged into it. This approach matches the gains of more complex auxiliary-objective methods at lower computational cost, and it transfers across different models and to new tasks out of the box.
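As a rough illustration of the idea (a sketch, not the paper's exact recipe): the capability vector is the parameter-space difference between the two finetuned models, and merging adds that vector back onto the pretrained weights. The names `theta_aux`, `theta_sft`, `theta_pre`, and the scaling coefficient `alpha` below are illustrative assumptions, not identifiers from the paper.

```python
import torch


def extract_capability_vector(theta_aux: dict, theta_sft: dict) -> dict:
    """Capability vector: parameter difference between the model finetuned
    with auxiliary objectives (theta_aux) and the one finetuned with
    standard SFT (theta_sft). Both are state dicts of tensors."""
    return {k: theta_aux[k] - theta_sft[k] for k in theta_sft}


def merge_into_pretrained(theta_pre: dict, cap_vec: dict, alpha: float = 1.0) -> dict:
    """Add the (optionally scaled) capability vector to the pretrained
    weights to form a capability-enhanced meta model."""
    return {k: theta_pre[k] + alpha * cap_vec[k] for k in theta_pre}
```

This mirrors task-arithmetic-style model merging, in which directions in weight space are treated as transferable, composable capabilities.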

Keywords
pretrained models · vision-language-action models · supervised fine-tuning (SFT) · auxiliary training objectives · parameter space · capability vectors · orthogonal regularization · model generalization · task-specific adaptation
Authors
Wenxuan Song, Han Zhao, Fuhao Li, Ziyang Zhou, Xi Wang, Jing Lyu, Pengxiang Ding, Yan Wang, Donglin Wang, Haoang Li
Abstract
This paper proposes a novel approach to a key challenge in adapting pretrained vision-language-action (VLA) models: standard supervised finetuning (SFT) often fails to effectively improve performance or reduce adaptation costs. Advanced finetuning methods with auxiliary training objectives can improve performance and reduce the number of convergence steps, but they typically incur significant computational overhead due to the additional losses from the auxiliary objectives. To combine the enhanced capabilities of auxiliary training with the simplicity of standard SFT, we decouple the two objectives of auxiliary-objective SFT within the parameter space: enhancing general capabilities and fitting task-specific action distributions. To achieve this, we only need to train the model to convergence on a small-scale task set with two distinct training strategies, yielding two finetuned models. The parameter difference between the two models can then be interpreted as capability vectors contributed by the auxiliary objectives. These vectors are merged with the pretrained parameters to form a capability-enhanced meta model. Moreover, when standard SFT is augmented with a lightweight orthogonal regularization loss, the merged model attains performance comparable to auxiliary-finetuned baselines at reduced computational overhead. Internal and external experiments demonstrate that our capability vectors (1) are effective and versatile across diverse models, and (2) generalize to novel environments and embodiments out of the box.
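The abstract does not spell out the form of the orthogonal regularization, so the sketch below uses one common instantiation from the literature (penalizing the deviation of each weight matrix's Gram matrix from the identity) purely as an assumption; the loss weight `lam` and the helper names are likewise hypothetical.

```python
import torch


def orthogonal_penalty(weight: torch.Tensor) -> torch.Tensor:
    """Penalize ||W W^T - I||_F^2 so the rows of W stay near-orthonormal.
    One common form of orthogonal regularization; the paper's exact
    regularizer may differ."""
    gram = weight @ weight.t()
    eye = torch.eye(gram.size(0), device=weight.device)
    return ((gram - eye) ** 2).sum()


def sft_loss_with_ortho(action_loss: torch.Tensor, model: torch.nn.Module,
                        lam: float = 1e-4) -> torch.Tensor:
    """Standard SFT action loss plus a lightweight orthogonality term
    summed over the model's 2-D weight matrices."""
    reg = sum(orthogonal_penalty(p) for n, p in model.named_parameters()
              if p.dim() == 2 and "weight" in n)
    return action_loss + lam * reg
```

Under this reading, the regularizer adds only a cheap per-step penalty on existing weights, which is consistent with the paper's claim of reduced overhead relative to full auxiliary-objective losses.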