Sparse Autoencoders Reveal Interpretable and Steerable Features in VLA Models
2026-03-19 • Robotics
AI summary
The authors studied how Vision-Language-Action (VLA) models for robot control work internally by analyzing their hidden activations with Sparse Autoencoders (SAEs). They found that most learned features are specific to past training examples, but some capture general, meaningful motion and task elements that can be reused in new situations. They created a way to separate these general features from memorized ones and showed that manipulating these features changes robot behavior in predictable ways. Their work suggests that larger, more diverse datasets help models learn more general skills, while small fine-tuning datasets lead to more memorization.
Vision-Language-Action (VLA) models, Sparse Autoencoders (SAEs), mechanistic interpretability, robot manipulation, feature steering, generalization, memorization, motion primitives, fine-tuning, robot behavior
Authors
Aiden Swann, Lachlain McGranahan, Hugo Buurmeijer, Monroe Kennedy, Mac Schwager
Abstract
Vision-Language-Action (VLA) models have emerged as a promising approach for general-purpose robot manipulation. However, their generalization is inconsistent: while these models can perform impressively in some settings, fine-tuned variants often fail on novel objects, scenes, and instructions. We apply mechanistic interpretability techniques to better understand the inner workings of VLA models. To probe internal representations, we train Sparse Autoencoders (SAEs) on hidden layer activations of the VLA. SAEs learn a sparse dictionary whose features act as a compact, interpretable basis for the model's computation. We find that the large majority of extracted SAE features correspond to memorized sequences from specific training demonstrations. However, some features correspond to interpretable, general, and steerable motion primitives and semantic properties, offering a promising glimpse toward VLA generalizability. We propose a metric to categorize features according to whether they represent generalizable, transferable primitives or episode-specific memorization. We validate these findings through steering experiments on the LIBERO benchmark. We show that individual SAE features causally influence robot behavior. Steering general features induces behaviors consistent with their semantic meaning and can be applied across tasks and scenes. This work provides the first mechanistic evidence that VLAs can learn generalizable features across tasks and scenes. We observe that supervised fine-tuning on small robotics datasets disproportionately amplifies memorization. In contrast, training on larger, more diverse datasets (e.g., DROID) or using knowledge insulation promotes more general features. We provide an open-source codebase and user-friendly interface for activation collection, SAE training, and feature steering. Our project page is located at http://drvla.github.io.
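The two core operations the abstract describes, learning a sparse dictionary over hidden activations and steering behavior by adding a feature's decoder direction back into the activation, can be sketched in a few lines. This is a minimal, hypothetical illustration with toy dimensions and a randomly initialized (untrained) dictionary; the names `encode`, `decode`, `sae_loss`, and `steer` are assumptions for illustration, not the authors' codebase API.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, d_dict = 16, 64  # toy sizes; real VLA layers use far larger dimensions

# Randomly initialized dictionary (training loop omitted for brevity)
W_enc = rng.normal(0.0, 0.1, (d_model, d_dict))
b_enc = np.zeros(d_dict)
W_dec = rng.normal(0.0, 0.1, (d_dict, d_model))
W_dec /= np.linalg.norm(W_dec, axis=1, keepdims=True)  # unit-norm feature directions

def encode(h):
    """Map a hidden activation to sparse, non-negative feature activations."""
    return np.maximum(h @ W_enc + b_enc, 0.0)

def decode(f):
    """Reconstruct the activation as a sparse combination of dictionary directions."""
    return f @ W_dec

def sae_loss(h, l1_coef=1e-3):
    """Reconstruction error plus an L1 penalty that encourages sparsity."""
    f = encode(h)
    return np.mean((h - decode(f)) ** 2) + l1_coef * np.abs(f).mean()

def steer(h, feature_idx, alpha):
    """Shift the activation along one feature's decoder direction by strength alpha."""
    return h + alpha * W_dec[feature_idx]

h = rng.normal(size=d_model)          # stand-in for one VLA hidden activation
f = encode(h)                         # sparse feature vector, shape (d_dict,)
h_steered = steer(h, feature_idx=3, alpha=5.0)
```

During SAE training, `sae_loss` would be minimized over a large set of collected activations; at inference time, `steer` patches the steered activation back into the VLA's forward pass, which is the causal intervention the abstract's steering experiments rely on.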