LaST-R1: Reinforcing Action via Adaptive Physical Latent Reasoning for VLA Models
2026-04-30 • Robotics
Robotics • Computer Vision and Pattern Recognition
AI summary
The authors developed LaST-R1, a system that helps robots reason about physical dynamics before acting, combining latent Chain-of-Thought reasoning with trial-and-error reinforcement learning. They introduced LAPO, an algorithm that jointly optimizes the robot's latent reasoning steps and its action generation. The method also adapts how much reasoning the robot performs based on task difficulty. In tests, LaST-R1 reached a 99.8% average success rate on the LIBERO benchmark and improved real-world task success by up to 44% over the warm-up policy, while generalizing across simulated and real settings. Overall, the work helps robots act more intelligently in complex environments.
Vision-Language-Action models • Chain-of-Thought reasoning • Reinforcement Learning • Latent reasoning • Robotic manipulation • Policy optimization • Physical dynamics • Trial-and-error learning • Generalization • Benchmark evaluation
Authors
Hao Chen, Jiaming Liu, Zhonghao Yan, Nuowei Han, Renrui Zhang, Chenyang Gu, Jialin Gao, Ziyu Guo, Siyuan Qian, Yinxi Wang, Peng Jia, Chi-Wing Fu, Shanghang Zhang, Pheng-Ann Heng
Abstract
Vision-Language-Action (VLA) models have increasingly incorporated reasoning mechanisms for complex robotic manipulation. However, existing approaches share a critical limitation: whether they employ explicit linguistic reasoning, which suffers from latency and discretization, or more expressive continuous latent reasoning, they are predominantly confined to static imitation learning, which limits adaptability and generalization. While online reinforcement learning (RL) has been introduced to VLAs to enable trial-and-error exploration, current methods exclusively optimize the vanilla action space, bypassing the underlying physical reasoning process. In this paper, we present LaST-R1, a unified VLA framework that integrates latent Chain-of-Thought (CoT) reasoning over physical dynamics prior to action execution, along with a tailored RL post-training paradigm. Specifically, we propose Latent-to-Action Policy Optimization (LAPO), a novel RL algorithm that jointly optimizes the latent reasoning process and action generation. By bridging reasoning and control, LAPO improves the representation of physical world modeling and enhances robustness in interactive environments. Furthermore, an adaptive latent CoT mechanism allows the policy to dynamically adjust its reasoning horizon based on environment complexity. Extensive experiments show that LaST-R1 achieves a near-perfect 99.8% average success rate on the LIBERO benchmark with only one-shot supervised warm-up, significantly improving convergence speed and performance over prior state-of-the-art methods. In real-world deployments, LAPO post-training yields up to a 44% improvement over the initial warm-up policy across four complex tasks, spanning both single-arm and dual-arm settings. Finally, LaST-R1 demonstrates strong generalization across simulated and real-world environments.
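The abstract does not give LAPO's objective, but its core idea, applying one reward signal jointly to the latent reasoning process and the action head, can be illustrated with a generic policy-gradient toy. The sketch below is a minimal, hypothetical REINFORCE example (not the paper's algorithm): `theta_r` stands in for latent-reasoning parameters, `theta_a` for action-generation parameters, and `adaptive_horizon` hand-codes a complexity-dependent reasoning depth that the paper instead learns. All names and the scalar setup are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def adaptive_horizon(complexity, max_steps=4):
    """Pick a latent reasoning horizon that grows with task complexity
    (hypothetical hand-set rule; the paper's mechanism is learned)."""
    return max(1, min(max_steps, int(np.ceil(complexity * max_steps))))

def reinforce_step(theta_r, theta_a, state, reward_fn, lr=0.1, sigma=0.5):
    """One joint REINFORCE update over reasoning params (theta_r) and
    action params (theta_a): a single reward gradient flows through both,
    sketching 'latent-to-action' joint optimization."""
    # Latent reasoning: a refined representation of the raw state.
    z = np.tanh(theta_r * state)
    # Stochastic action: Gaussian policy centered on a latent-conditioned mean.
    mean = theta_a * z
    action = mean + rng.normal(scale=sigma)
    reward = reward_fn(action)
    # Gradient of log N(action; mean, sigma^2) w.r.t. the mean, chained
    # back into BOTH parameter sets (action head and latent reasoning).
    dlogp_dmean = (action - mean) / sigma**2
    grad_a = dlogp_dmean * z
    grad_r = dlogp_dmean * theta_a * state * (1 - z**2)  # tanh derivative
    return theta_r + lr * reward * grad_r, theta_a + lr * reward * grad_a

# Toy usage: reward peaks at a (hypothetical) target action of 1.0.
theta_r, theta_a, state = 0.5, 0.5, 1.0
reward_fn = lambda a: np.exp(-(a - 1.0) ** 2)
for _ in range(500):
    theta_r, theta_a = reinforce_step(theta_r, theta_a, state, reward_fn)
final_mean = theta_a * np.tanh(theta_r * state)
```

Because the reward weights the log-likelihood gradient of both parameter sets at once, the latent representation and the action policy co-adapt, which is the intuition behind optimizing reasoning and control together rather than the action space alone.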