Privileged Foresight Distillation: Zero-Cost Future Correction for World Action Models
2026-04-28 • Robotics
AI summary
The authors study models that learn to predict both future video frames and actions, noting that future prediction may help only during training rather than being needed at test time. They argue that knowing future information helps correct action predictions in a specific way, which they call privileged foresight. To capture this benefit without requiring future video at deployment, they develop a method called Privileged Foresight Distillation, which transfers this corrective knowledge from a teacher model to a simpler student model. Their experiments show that this approach improves action prediction with no added latency and no need for future video when running the model.
world action models • future prediction • action denoising • privileged information • knowledge distillation • attention mask • video tokens • manipulation benchmarks • visual backbone • model regularization
Authors
Pengcheng Fang, Hongli Chen, Xiaohao Cai
Abstract
World action models jointly predict future video and actions during training, raising an open question about what role the future-prediction branch actually plays. A recent finding shows that this branch can be removed at inference with little to no loss on common manipulation benchmarks, suggesting that future information may act merely as a regularizer on the shared visual backbone. We propose instead that joint training induces an action-conditioned correction that privileged future observations impose on action denoising, and that current-only policies capture this correction only partially. Making this account precise, we formulate privileged foresight as a residual in the action-denoising direction -- the difference between what a model predicts given the true future and what it predicts given only the current frame -- and introduce Privileged Foresight Distillation (PFD), which transfers this residual from a training-time teacher into a small adapter on a current-only student. The teacher and student share the same backbone and differ only in the attention mask over video tokens; future video is never generated at inference. Empirically, PFD achieves consistent improvements on the LIBERO and RoboTwin manipulation benchmarks while preserving the current-only inference interface at negligible added latency, and controlled experiments verify that these gains reflect a genuine future-conditioned correction rather than a side effect of capacity or regularization. This view reframes the role of future information in world action models: not as a target to predict, nor as a regularizer to absorb, but as a compressible correction to be distilled.
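To make the abstract's formulation concrete: the privileged-foresight residual is the gap between a future-conditioned denoising prediction and a current-only one, roughly $\Delta = \epsilon_\theta(a \mid o_{\text{cur}}, o_{\text{fut}}) - \epsilon_\theta(a \mid o_{\text{cur}})$, and the adapter is trained to reproduce this gap from the student's features alone. Below is a minimal PyTorch sketch of one way such training could be wired up. The module names (`PFDHead`, `denoise_head`, `adapter`), the backbone's `attn_mask` interface, and the MSE distillation target are illustrative assumptions for exposition, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PFDHead(nn.Module):
    """Sketch of Privileged Foresight Distillation (PFD).

    Teacher and student share one backbone and differ only in the
    attention mask over video tokens: the teacher attends to current
    AND future tokens, the student to current tokens only. A small
    adapter on the student learns the teacher-minus-student residual,
    so future video is never needed (or generated) at inference.
    """

    def __init__(self, backbone: nn.Module, hidden: int, action_dim: int):
        super().__init__()
        self.backbone = backbone  # shared weights for both branches
        self.denoise_head = nn.Linear(hidden, action_dim)
        # Small adapter that predicts the privileged-foresight residual
        # from current-only features.
        self.adapter = nn.Sequential(
            nn.Linear(hidden, hidden),
            nn.GELU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, tokens, teacher_mask, student_mask):
        # Teacher pass: mask exposes current + future video tokens.
        # Detached so distillation trains only the student-side path.
        with torch.no_grad():
            h_teacher = self.backbone(tokens, attn_mask=teacher_mask)
            eps_teacher = self.denoise_head(h_teacher)

        # Student pass: same backbone, mask restricted to current tokens.
        h_student = self.backbone(tokens, attn_mask=student_mask)
        eps_student = self.denoise_head(h_student)

        # Residual target: the correction the true future contributes
        # to the action-denoising direction.
        residual_target = (eps_teacher - eps_student).detach()
        residual_pred = self.adapter(h_student)
        distill_loss = F.mse_loss(residual_pred, residual_target)

        # Current-only prediction used at inference: student output plus
        # the distilled correction. No future video enters this path.
        action_eps = eps_student + residual_pred
        return action_eps, distill_loss
```

Under these assumptions, deployment runs only the student mask and the adapter MLP, which is why the added inference cost is a single small forward pass rather than a future-video rollout.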