Vega: Learning to Drive with Natural Language Instructions

2026-03-26

Computer Vision and Pattern Recognition · Artificial Intelligence · Robotics
AI summary

The authors build a large driving dataset, InstructScene, that pairs diverse driving instructions with the corresponding driving trajectories. They then propose Vega, a model that jointly understands images, reads instructions, predicts future scenes, and plans driving actions. Vega uses joint attention to fuse visual and language information and modality-specific projection layers to handle each input type. Experiments show that Vega plans driving routes well and follows instructions closely, moving toward smarter, more personalized self-driving cars.

vision-language models, autonomous driving, instruction following, dataset annotation, autoregressive modeling, diffusion models, multi-modal attention, trajectory planning, scene understanding
Authors
Sicheng Zuo, Yuxuan Li, Wenzhao Zheng, Zheng Zhu, Jie Zhou, Jiwen Lu
Abstract
Vision-language-action models have reshaped autonomous driving by incorporating language into the decision-making process. However, most existing pipelines only utilize the language modality for scene descriptions or reasoning and lack the flexibility to follow diverse user instructions for personalized driving. To address this, we first construct a large-scale driving dataset (InstructScene) containing around 100,000 scenes annotated with diverse driving instructions and the corresponding trajectories. We then propose a unified Vision-Language-World-Action model, Vega, for instruction-based generation and planning. We employ the autoregressive paradigm to process visual inputs (vision) and language instructions (language) and the diffusion paradigm to generate future predictions (world modeling) and trajectories (action). We perform joint attention to enable interactions between the modalities and use individual projection layers for each modality to better preserve modality-specific capabilities. Extensive experiments demonstrate that our method not only achieves superior planning performance but also exhibits strong instruction-following abilities, paving the way for more intelligent and personalized driving systems.
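The abstract's core architectural idea — a single joint attention operation over all four modalities, with individual projection layers per modality — can be sketched in a few lines. The following NumPy sketch is an illustrative assumption about what that mechanism might look like; all class names, shapes, and token counts are hypothetical and not taken from the paper.

```python
# Hypothetical sketch of joint attention with per-modality projections,
# as described in the Vega abstract. Dimensions and names are assumptions.
import numpy as np

rng = np.random.default_rng(0)
D = 16  # shared hidden size (illustrative)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class JointAttention:
    """One attention pass over concatenated tokens; each modality has
    its own Q/K/V projection matrices (the 'individual projection layers')."""
    def __init__(self, modalities, dim):
        self.dim = dim
        self.proj = {
            m: {k: rng.normal(0, dim ** -0.5, (dim, dim)) for k in ("q", "k", "v")}
            for m in modalities
        }

    def __call__(self, tokens):
        # tokens: dict mapping modality name -> array of shape (n_m, dim)
        qs, ks, vs = [], [], []
        for m, x in tokens.items():
            p = self.proj[m]  # modality-specific projections
            qs.append(x @ p["q"]); ks.append(x @ p["k"]); vs.append(x @ p["v"])
        q, k, v = (np.concatenate(a) for a in (qs, ks, vs))
        # joint attention: every token attends to tokens of all modalities
        attn = softmax(q @ k.T / np.sqrt(self.dim))
        return attn @ v

attn = JointAttention(["vision", "language", "world", "action"], D)
out = attn({
    "vision": rng.normal(size=(8, D)),    # e.g. image patch tokens
    "language": rng.normal(size=(5, D)),  # instruction tokens
    "world": rng.normal(size=(4, D)),     # future-prediction tokens
    "action": rng.normal(size=(3, D)),    # trajectory tokens
})
print(out.shape)  # (20, 16): one output per input token
```

In a real model the vision/language tokens would be produced autoregressively and the world/action tokens denoised by a diffusion head, but the shared attention with separate projections is the interaction mechanism the abstract names.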