Video Models Reason Early: Exploiting Plan Commitment for Maze Solving

2026-03-31

Computer Vision and Pattern Recognition
AI summary

The authors study how video diffusion models solve mazes by looking closely at their planning process during generation. They find that these models commit to a rough path within the first few denoising steps and only fill in visual details afterward, and that maze difficulty depends more on how long the solution path is than on how many obstacles there are. Building on these insights, they introduce a method called ChEaP that focuses compute on seeds with promising early plans and chains generations together, substantially improving maze-solving performance. Their work shows that video models can reason better than previously thought when paired with the right inference-time strategy.

video diffusion models, maze solving, denoising steps, motion planning, path length, obstacle density, inference scaling, ChEaP, sequential generation, reasoning capabilities
Authors
Kaleb Newman, Tyler Zhu, Olga Russakovsky
Abstract
Video diffusion models exhibit emergent reasoning capabilities such as solving mazes and puzzles, yet little is understood about how they reason during generation. We take a first step toward understanding this by studying the internal planning dynamics of video models, using 2D maze solving as a controlled testbed. Our investigation reveals two findings. First, early plan commitment: video diffusion models commit to a high-level motion plan within the first few denoising steps, after which further denoising alters visual details but not the underlying trajectory. Second, path length, not obstacle density, is the dominant predictor of maze difficulty, with a sharp failure threshold at 12 steps; video models can therefore reason over long mazes only by chaining together multiple sequential generations. To demonstrate the practical benefits of these findings, we introduce Chaining with Early Planning (ChEaP), which spends compute only on seeds with promising early plans and chains them together to tackle complex mazes. This improves accuracy from 7% to 67% on long-horizon mazes and by 2.5x overall on hard tasks in Frozen Lake and VR-Bench across Wan2.2-14B and HunyuanVideo-1.5. Our analysis reveals that current video models possess deeper reasoning capabilities than previously recognized, which can be elicited more reliably with better inference-time scaling.
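To make the two-stage idea concrete, here is a minimal Python sketch of the inference loop the abstract describes: score many seeds after only a few denoising steps, fully denoise only the promising ones, and chain short generations so each stays under the ~12-step failure threshold. Every function (`partial_denoise`, `plan_score`) and every constant here is a hypothetical placeholder, not the authors' implementation; a real version would run the video model and read the plan off the early latents.

```python
"""Hedged sketch of the ChEaP idea (Chaining with Early Planning).
All model- and maze-specific pieces are toy stand-ins, labeled below."""

import random

EARLY_STEPS = 4    # assumption: the high-level plan is readable this early
TOTAL_STEPS = 50   # assumption: length of the full denoising schedule
SEGMENT_LEN = 12   # from the abstract: sharp failure threshold at 12 steps


def partial_denoise(seed: int, steps: int) -> list[int]:
    """Hypothetical stand-in: return a coarse trajectory after `steps`
    denoising steps. A real implementation would run the video model
    from this seed and decode the agent's path from the frames."""
    rng = random.Random(seed + steps)
    return [rng.randrange(4) for _ in range(SEGMENT_LEN)]  # 4 move directions


def plan_score(trajectory: list[int], maze) -> float:
    """Hypothetical heuristic for how promising an early plan looks,
    e.g. progress toward the goal without wall collisions. Here it is
    just a deterministic pseudo-random stand-in."""
    return random.Random(hash(tuple(trajectory))).random()


def cheap_segment(maze, num_seeds: int = 16, keep: int = 2) -> list[int]:
    """Score early plans across seeds, spend full denoising compute only
    on the `keep` best ones, and return the best completed segment."""
    early = {s: plan_score(partial_denoise(s, EARLY_STEPS), maze)
             for s in range(num_seeds)}
    survivors = sorted(early, key=early.get, reverse=True)[:keep]
    finished = [partial_denoise(s, TOTAL_STEPS) for s in survivors]
    return max(finished, key=lambda t: plan_score(t, maze))


def cheap_solve(maze, num_segments: int = 4) -> list[int]:
    """Chain several short generations so each one stays below the
    ~12-step horizon where single generations start to fail."""
    path: list[int] = []
    for _ in range(num_segments):
        path.extend(cheap_segment(maze))
        # assumption: the next generation is conditioned on the last frame
    return path


if __name__ == "__main__":
    print(len(cheap_solve(maze=None)))  # toy run: 4 segments x 12 moves = 48
```

The design point the sketch captures is that early plan commitment turns seed selection into a cheap filter: since the trajectory is fixed within the first few steps, rejecting a bad seed after `EARLY_STEPS` costs a small fraction of a full generation.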