BPP: Long-Context Robot Imitation Learning by Focusing on Key History Frames

2026-02-16

Robotics · Machine Learning
AI summary

The authors explain that robots often need to remember past observations to complete tasks, like searching a room. However, current robot programs usually only focus on what they see right now, which makes it hard to handle tasks needing memory. They found that simply using all past information causes the robot to pick up on accidental details that don’t help in new situations. To fix this, the authors created Big Picture Policies (BPP), which uses a smart way to pick out important past moments using a vision-language model. BPP helps robots remember the most relevant parts of the past, improving task success without being confused by irrelevant details.

robot policies, history conditioning, spurious correlations, distribution shift, vision-language models, keyframe detection, task-relevant events, manipulation tasks, rollouts, generalization
Authors
Max Sobol Mark, Jacky Liang, Maria Attarian, Chuyuan Fu, Debidatta Dwibedi, Dhruv Shah, Aviral Kumar
Abstract
Many robot tasks require attending to the history of past observations. For example, finding an item in a room requires remembering which places have already been searched. However, the best-performing robot policies typically condition only on the current observation, limiting their applicability to such tasks. Naively conditioning on past observations often fails due to spurious correlations: policies latch onto incidental features of training histories that do not generalize to out-of-distribution trajectories upon deployment. We analyze why policies latch onto these spurious correlations and find that this problem stems from limited coverage over the space of possible histories during training, which grows exponentially with horizon. Existing regularization techniques provide inconsistent benefits across tasks, as they do not fundamentally address this coverage problem. Motivated by these findings, we propose Big Picture Policies (BPP), an approach that conditions on a minimal set of meaningful keyframes detected by a vision-language model. By projecting diverse rollouts onto a compact set of task-relevant events, BPP substantially reduces distribution shift between training and deployment, without sacrificing expressivity. We evaluate BPP on four challenging real-world manipulation tasks and three simulation tasks, all requiring history conditioning. BPP achieves 70% higher success rates than the strongest baseline on real-world evaluations.
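The core idea of projecting a long observation history onto a compact set of task-relevant keyframes can be illustrated with a minimal sketch. This is not the authors' implementation: the `Frame` class, the `is_task_relevant` predicate (standing in for a vision-language model query), and the drawer-search example are all hypothetical illustrations of the keyframe-conditioning idea described in the abstract.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Frame:
    index: int
    observation: str  # placeholder for an image observation


def select_keyframes(
    history: List[Frame],
    is_task_relevant: Callable[[Frame], bool],
) -> List[Frame]:
    """Project a full observation history onto task-relevant keyframes.

    Past frames are kept only if the (here: stubbed) relevance detector
    flags them; the current frame is always kept so the policy still
    sees the present observation.
    """
    keyframes = [f for f in history[:-1] if is_task_relevant(f)]
    return keyframes + [history[-1]]


# Toy search task: only "checked ..." events matter for deciding where
# to look next, so they are the keyframes the policy conditions on.
history = [
    Frame(i, obs)
    for i, obs in enumerate(
        ["approach", "checked drawer A", "move", "move",
         "checked drawer B", "current view"]
    )
]
compact = select_keyframes(
    history, lambda f: f.observation.startswith("checked")
)
print([f.observation for f in compact])
# -> ['checked drawer A', 'checked drawer B', 'current view']
```

However many filler frames the rollout contains, the policy input stays the same small set of events, which is how this projection shrinks the space of possible histories the policy must cover.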