Mask World Model: Predicting What Matters for Robust Robot Policy Learning
2026-04-21 • Robotics
AI summary
The authors show that teaching robots to predict simple shapes or masks in videos, instead of full-color images, helps them focus on important physical movements and ignore unnecessary visual details like background or lighting changes. They created a system called Mask World Model (MWM) that uses this idea and combines it with smart decision-making tools to better control robots. Tests showed that MWM works better than older methods that rely on predicting full images, and it also handles real-world challenges more reliably. This approach helps robots learn policies that generalize well and stay robust even when some visual information is missing.
world models, video diffusion, semantic masks, robot policy learning, geometric bottleneck, physical dynamics, contact relations, end-to-end control, generalization, robustness
Authors
Yunfan Lou, Xiaowei Chi, Xiaojie Zhang, Zezhong Qian, Chengxuan Li, Rongyu Zhang, Yaoxu Lyu, Guoyu Song, Chuyao Fu, Haoxuan Xu, Pengwei Wang, Shanghang Zhang
Abstract
World models derived from large-scale video generative pre-training have emerged as a promising paradigm for generalist robot policy learning. However, standard approaches often focus on high-fidelity RGB video prediction, which can lead to overfitting to irrelevant factors such as dynamic backgrounds and illumination changes. These distractions reduce the model's ability to generalize, ultimately leading to unreliable and fragile control policies. To address this, we introduce the Mask World Model (MWM), which leverages video diffusion architectures to predict the evolution of semantic masks instead of pixels. This shift imposes a geometric information bottleneck, forcing the model to capture essential physical dynamics and contact relations while filtering out visual noise. We seamlessly integrate this mask dynamics backbone with a diffusion-based policy head to enable robust end-to-end control. Extensive evaluations on the LIBERO and RLBench simulation benchmarks demonstrate the superiority of MWM, which significantly outperforms state-of-the-art RGB-based world models. Furthermore, real-world experiments and a robustness evaluation (via random token pruning) show that MWM generalizes well and remains resilient to texture information loss.
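To make the robustness evaluation concrete, the sketch below shows one common way "random token pruning" can be implemented: randomly dropping a fraction of a model's visual tokens before prediction, simulating texture information loss. The function name, the list-based token interface, and the keep-ratio parameter are illustrative assumptions, not the authors' actual API.

```python
import random


def prune_tokens(tokens, keep_ratio, seed=None):
    """Randomly keep a fraction of visual tokens (hypothetical helper).

    tokens:     sequence of token embeddings (any objects)
    keep_ratio: fraction of tokens to retain, in (0, 1]
    seed:       optional seed for reproducible pruning
    """
    rng = random.Random(seed)
    n_keep = max(1, int(len(tokens) * keep_ratio))
    # Sample which token positions survive, preserving their original order.
    keep_idx = sorted(rng.sample(range(len(tokens)), n_keep))
    return [tokens[i] for i in keep_idx]


# Example: prune 16 tokens down to 75%, i.e. 12 survivors.
pruned = prune_tokens(list(range(16)), keep_ratio=0.75, seed=0)
print(len(pruned))
```

A mask-based world model would then be fed the pruned token sequence; the paper's claim is that because MWM relies on geometry rather than texture, its predictions degrade less under this kind of token loss than RGB-based models.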