PriorVLA: Prior-Preserving Adaptation for Vision-Language-Action Models

2026-05-11 · Robotics

AI summary

The authors present PriorVLA, a method for adapting large Vision-Language-Action models to downstream robot tasks without erasing their general pretrained knowledge. Instead of fully retraining the model, PriorVLA keeps a frozen expert that retains the original priors and trains a smaller Adaptation Expert on the new tasks. This updates far fewer parameters while performing better, especially on out-of-distribution or data-scarce tasks. Experiments show PriorVLA surpasses prior methods on both simulated benchmarks and real-world robot tasks, and handles unfamiliar situations more reliably.

Vision-Language-Action models · pretraining · fine-tuning · robot manipulation · parameter-efficient adaptation · out-of-distribution generalization · few-shot learning · expert models · RoboTwin · LIBERO dataset
Authors
Xinyu Guo, Bin Xie, Wei Chai, Xianchi Deng, Tiancai Wang, Zhengxing Wu, Xingyu Chen
Abstract
Large-scale pretraining has made Vision-Language-Action (VLA) models promising foundations for generalist robot manipulation, yet adapting them to downstream tasks remains necessary. However, the common practice of full fine-tuning treats pretraining as initialization and can shift broad priors toward narrow training-distribution patterns. We propose PriorVLA, a novel framework that preserves pretrained priors and learns to leverage them for effective adaptation. PriorVLA keeps a frozen Prior Expert as a read-only prior source and trains an Adaptation Expert for downstream specialization. Expert Queries capture scene priors from the pretrained VLM and motor priors from the Prior Expert, integrating both into the Adaptation Expert to guide adaptation. In total, PriorVLA updates only 25% of the parameters updated by full fine-tuning. Across RoboTwin 2.0, LIBERO, and real-world tasks, PriorVLA achieves stronger overall performance than full fine-tuning and state-of-the-art VLA baselines, with the largest gains under out-of-distribution (OOD) and few-shot settings. PriorVLA improves over pi0.5 by 11 points on RoboTwin 2.0-Hard and achieves 99.1% average success on LIBERO. Across eight real-world tasks and two embodiments, PriorVLA reaches 81% in-distribution (ID) and 57% OOD success with standard data. With only 10 demonstrations per task, PriorVLA reaches 48% ID and 32% OOD success, surpassing pi0.5 by 24 and 22 points, respectively.
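To make the division of labor concrete, here is a minimal PyTorch sketch of the adaptation pattern the abstract describes: a frozen VLM and a frozen Prior Expert act as read-only prior sources, learnable Expert Queries cross-attend to both, and the result conditions a trainable Adaptation Expert. Every module choice, dimension, and wiring detail below is an illustrative assumption, not the authors' implementation.

```python
# Minimal sketch of the PriorVLA adaptation pattern (assumed architecture,
# not the paper's code): frozen prior sources + trainable queries/expert.
import torch
import torch.nn as nn


class ExpertQueries(nn.Module):
    """Learnable queries that read scene and motor priors via cross-attention."""

    def __init__(self, num_queries: int, dim: int, num_heads: int = 8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, dim) * 0.02)
        self.scene_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.motor_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, vlm_feats: torch.Tensor, prior_feats: torch.Tensor) -> torch.Tensor:
        # vlm_feats:   (B, N_v, D) frozen VLM tokens        -> scene priors
        # prior_feats: (B, N_p, D) frozen Prior Expert tokens -> motor priors
        b = vlm_feats.size(0)
        q = self.queries.unsqueeze(0).expand(b, -1, -1)
        q = q + self.scene_attn(q, vlm_feats, vlm_feats, need_weights=False)[0]
        q = q + self.motor_attn(q, prior_feats, prior_feats, need_weights=False)[0]
        return q  # (B, num_queries, D) prior-conditioned context


class PriorVLASketch(nn.Module):
    def __init__(self, dim: int = 512, num_queries: int = 16):
        super().__init__()
        # Frozen components: read-only prior sources (stand-ins for the
        # pretrained VLM backbone and the pretrained action expert).
        self.vlm = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, 8, batch_first=True), num_layers=2)
        self.prior_expert = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, 8, batch_first=True), num_layers=2)
        for p in self.vlm.parameters():
            p.requires_grad_(False)
        for p in self.prior_expert.parameters():
            p.requires_grad_(False)
        # Trainable components: Expert Queries + Adaptation Expert + head.
        self.expert_queries = ExpertQueries(num_queries, dim)
        self.adaptation_expert = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, 8, batch_first=True), num_layers=2)
        self.action_head = nn.Linear(dim, 7)  # e.g. a 7-DoF action; illustrative

    def forward(self, obs_tokens: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():  # priors are read, never updated
            vlm_feats = self.vlm(obs_tokens)
            prior_feats = self.prior_expert(vlm_feats)
        ctx = self.expert_queries(vlm_feats, prior_feats)
        # Condition the Adaptation Expert on the prior-derived context.
        h = self.adaptation_expert(torch.cat([ctx, vlm_feats], dim=1))
        return self.action_head(h[:, : ctx.size(1)].mean(dim=1))


if __name__ == "__main__":
    model = PriorVLASketch()
    obs = torch.randn(2, 64, 512)   # (batch, tokens, dim) stand-in observations
    print(model(obs).shape)         # torch.Size([2, 7])
```

In a layout like this, only the Expert Queries, the Adaptation Expert, and the action head receive gradients, which illustrates how such a scheme can update a small fraction of the parameters that full fine-tuning would touch.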