StarVLA-α: Reducing Complexity in Vision-Language-Action Systems

2026-04-13 · Robotics

Robotics · Artificial Intelligence · Computer Vision and Pattern Recognition
AI summary

The authors introduce StarVLA-α, a simple but effective model for robots that see, understand language, and act. They deliberately kept its design uncomplicated so they could study how individual design choices affect performance. Tested across several robot benchmarks, this straightforward approach performs very well, even beating a previous model by 20% on a real-world challenge. The authors conclude that a strong core backbone with minimal tweaks can be enough for good robot performance, and they will share their code to help others build on this work.

Vision-Language-Action models · robotic agents · model architecture · pretraining · benchmarking · multi-benchmark training · robot embodiment · robotic manipulation · generalist model · RoboChallenge
Authors
Jinhui Ye, Ning Gao, Senqiao Yang, Jinliang Zheng, Zixuan Wang, Yuxin Chen, Pengguang Chen, Yilun Chen, Shu Liu, Jiaya Jia
Abstract
Vision-Language-Action (VLA) models have recently emerged as a promising paradigm for building general-purpose robotic agents. However, the VLA landscape remains highly fragmented and complex: existing approaches vary substantially in architectures, training data, embodiment configurations, and benchmark-specific engineering. In this work, we introduce StarVLA-α, a simple yet strong baseline designed to study VLA design choices under controlled conditions. StarVLA-α deliberately minimizes architectural and pipeline complexity to reduce experimental confounders and enable systematic analysis. Specifically, we re-evaluate several key design axes, including action modeling strategies, robot-specific pretraining, and interface engineering. Across unified multi-benchmark training on LIBERO, SimplerEnv, RoboTwin, and RoboCasa, the same simple baseline remains highly competitive, indicating that a strong VLM backbone combined with minimal design is already sufficient to achieve strong performance without relying on additional architectural complexity or engineering tricks. Notably, our single generalist model outperforms π0.5 by 20% on the public real-world RoboChallenge benchmark. We expect StarVLA-α to serve as a solid starting point for future research in the VLA regime. Code will be released at https://github.com/starVLA/starVLA.