Long-Horizon Manipulation via Trace-Conditioned VLA Planning
2026-04-23 • Robotics
AI summary
The authors developed LoHo-Manip, a system to help robots follow complex, multi-step instructions more reliably. Their approach breaks down a big task into smaller pieces by using a manager that plans what to do next based on what the robot sees, and an executor that follows detailed visual cues to act locally. This setup allows the robot to adjust its plan if something goes wrong, without needing special recovery tricks. The authors tested their system on simulations and a real robot, showing better performance on long tasks and adaptability to new situations.
vision-language-action (VLA), long-horizon manipulation, task planning, receding horizon, visual trace, keypoint trajectory, robot control, closed-loop feedback, multi-step instruction following, Franka robot
Authors
Isabella Liu, An-Chieh Cheng, Rui Yan, Geng Chen, Ri-Zhao Qiu, Xueyan Zou, Sha Yi, Hongxu Yin, Xiaolong Wang, Sifei Liu
Abstract
Long-horizon manipulation remains challenging for vision-language-action (VLA) policies: real tasks are multi-step, progress-dependent, and brittle to compounding execution errors. We present LoHo-Manip, a modular framework that scales short-horizon VLA execution to long-horizon instruction following via a dedicated task-management VLM. The manager is decoupled from the executor and is invoked in a receding-horizon manner: given the current observation, it predicts a progress-aware remaining plan that combines (i) a subtask sequence with an explicit done + remaining split as lightweight language memory, and (ii) a visual trace -- a compact 2D keypoint trajectory prompt specifying where to go and what to approach next. The executor VLA is adapted to condition on the rendered trace, thereby turning long-horizon decision-making into repeated local control by following the trace. Crucially, predicting the remaining plan at each step yields an implicit closed loop: failed steps persist in subsequent outputs, and traces update accordingly, enabling automatic continuation and replanning without hand-crafted recovery logic or brittle visual-history buffers. Extensive experiments spanning embodied planning, long-horizon reasoning, trajectory prediction, and end-to-end manipulation in simulation and on a real Franka robot demonstrate strong gains in long-horizon success, robustness, and out-of-distribution generalization. Project page: https://www.liuisabella.com/LoHoManip
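The paper does not include pseudocode, but the receding-horizon loop the abstract describes can be sketched in miniature. In the toy Python below, `Manager`, `Executor`, `Plan`, and `rollout` are hypothetical stand-ins (not the authors' implementation): the manager re-predicts the *remaining* plan from the current observation at every step, so a subtask that failed simply reappears in the next plan and is retried, giving the implicit closed loop without any explicit recovery logic.

```python
# Hypothetical sketch of a receding-horizon manager/executor loop in the
# style of LoHo-Manip. All classes and internals here are toy stand-ins:
# the real manager is a VLM and the real executor is a trace-conditioned VLA.
from dataclasses import dataclass


@dataclass
class Plan:
    done: list[str]                     # subtasks the manager judges completed
    remaining: list[str]                # subtasks still to execute
    trace: list[tuple[float, float]]    # 2D keypoint trajectory for the next subtask


class Manager:
    """Stand-in for the task-management VLM: infers progress from the observation."""

    def __init__(self, subtasks):
        self.subtasks = subtasks

    def predict(self, observation) -> Plan:
        # Toy progress check: a subtask is "done" iff the observation records it.
        done = [s for s in self.subtasks if s in observation["completed"]]
        remaining = [s for s in self.subtasks if s not in observation["completed"]]
        # Toy trace: a short keypoint trajectory toward the next goal.
        trace = [(0.1 * i, 0.1 * i) for i in range(3)] if remaining else []
        return Plan(done, remaining, trace)


class Executor:
    """Stand-in for the trace-conditioned VLA: follows the trace locally."""

    def __init__(self, fail_once_on=None):
        self.fail_once_on = set(fail_once_on or [])

    def run(self, subtask, trace, observation):
        if subtask in self.fail_once_on:        # simulate one execution failure
            self.fail_once_on.remove(subtask)
            return                              # observation unchanged -> subtask re-emitted
        observation["completed"].add(subtask)


def rollout(manager, executor, observation, max_steps=10):
    """Repeatedly replan from the current observation and execute one subtask."""
    steps = []
    for _ in range(max_steps):
        plan = manager.predict(observation)     # receding-horizon replanning
        if not plan.remaining:
            break
        executor.run(plan.remaining[0], plan.trace, observation)
        steps.append(plan.remaining[0])
    return steps
```

Running this with subtasks `["pick", "place"]` and an executor that fails "pick" once yields the step sequence `["pick", "pick", "place"]`: the failed subtask persists in the manager's remaining plan and is retried automatically, which is the closed-loop behavior the abstract attributes to per-step replanning.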