HiAR: Efficient Autoregressive Long Video Generation via Hierarchical Denoising
2026-03-09 • Computer Vision and Pattern Recognition
AI summary
The authors study how to generate very long videos without quality degrading over time. They find that conditioning on video chunks that are not fully denoised keeps the video consistent while preventing errors from piling up. Their method, HiAR, generates the video by gradually refining all parts together rather than completing them one by one, which also speeds up inference. To keep motion realistic, they add a regularizer that prevents the model from collapsing to simple, low-motion outputs. Experiments show HiAR produces better videos with less quality loss over time than competing methods.
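A minimal sketch of the loop-order difference described above, under toy assumptions (the `denoise_step` stand-in and all function names are hypothetical, not the paper's implementation): conventional AR diffusion fully denoises each block before starting the next, whereas the hierarchical order sweeps causally over all blocks at every denoising step, so each block's context sits at the same noise level.

```python
import numpy as np

def denoise_step(block, context, t):
    """Toy stand-in for one denoiser call: nudge the block toward the
    mean of its context. Purely illustrative, not the actual model."""
    target = context.mean() if context is not None else 0.0
    return block + 0.5 * (target - block)

def conventional_ar(blocks, num_steps):
    """Complete each block fully before moving on: later blocks
    condition on fully denoised (high-certainty) context."""
    done = []
    for b in blocks:
        ctx = np.concatenate(done) if done else None
        for t in reversed(range(num_steps)):
            b = denoise_step(b, ctx, t)
        done.append(b)
    return done

def hierarchical_ar(blocks, num_steps):
    """Reversed generation order: at every denoising step, sweep
    causally over all blocks, so each block is conditioned on context
    at the same noise level as itself."""
    blocks = list(blocks)
    for t in reversed(range(num_steps)):
        for i in range(len(blocks)):
            ctx = np.concatenate(blocks[:i]) if i > 0 else None
            blocks[i] = denoise_step(blocks[i], ctx, t)
    return blocks
```

Because the inner sweep at step t for block i only needs block i-1's result from the same step, successive blocks can be staggered across steps and run in parallel, which is the pipelining the abstract credits for the wall-clock speedup.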
autoregressive diffusion, temporal continuity, error accumulation, bidirectional diffusion, denoising, causal generation, self-rollout distillation, reverse-KL objective, forward-KL regularizer, video generation
Authors
Kai Zou, Dian Zheng, Hongbo Liu, Tiankai Hang, Bin Liu, Nenghai Yu
Abstract
Autoregressive (AR) diffusion offers a promising framework for generating videos of theoretically infinite length. However, a major challenge is maintaining temporal continuity while preventing the progressive quality degradation caused by error accumulation. To ensure continuity, existing methods typically condition on highly denoised contexts; yet this practice propagates prediction errors with high certainty, thereby exacerbating degradation. In this paper, we argue that a highly clean context is unnecessary. Drawing inspiration from bidirectional diffusion models, which denoise frames at a shared noise level while maintaining coherence, we propose that conditioning on context at the same noise level as the current block provides sufficient signal for temporal consistency while effectively mitigating error propagation. Building on this insight, we propose HiAR, a hierarchical denoising framework that reverses the conventional generation order: instead of completing each block sequentially, it performs causal generation across all blocks at every denoising step, so that each block is always conditioned on context at the same noise level. This hierarchy naturally admits pipelined parallel inference, yielding a 1.8× wall-clock speedup in our 4-step setting. We further observe that self-rollout distillation under this paradigm amplifies a low-motion shortcut inherent to the mode-seeking reverse-KL objective. To counteract this, we introduce a forward-KL regularizer in bidirectional-attention mode, which preserves motion diversity for causal inference without interfering with the distillation loss. On VBench (20s generation), HiAR achieves the best overall score and the lowest temporal drift among all compared methods.
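The low-motion shortcut and its fix can be illustrated with a toy example (the distributions, the weight `lam`, and `total_loss` are hypothetical, not the paper's actual objective): reverse KL(student || teacher) is mode-seeking, so a student that drops the high-motion mode pays little penalty, while forward KL(teacher || student) is mass-covering and penalizes the dropped mode heavily — which is why adding it as a regularizer discourages the collapse.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """KL(p || q) for discrete distributions, with clipping for stability."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

# Toy teacher over two "motion modes": index 0 = low motion, index 1 = high motion.
teacher = np.array([0.6, 0.4])

# A student that has taken the low-motion shortcut (mode 1 nearly dropped).
collapsed = np.array([0.99, 0.01])

# Mode-seeking reverse KL barely punishes dropping the high-motion mode...
reverse_kl = kl(collapsed, teacher)
# ...while mass-covering forward KL penalizes it strongly.
forward_kl = kl(teacher, collapsed)

def total_loss(student, lam=0.1):
    """Distillation loss plus a forward-KL regularizer (weight lam is an
    illustrative assumption, not a value from the paper)."""
    return kl(student, teacher) + lam * kl(teacher, student)
```

Under this toy setup, `forward_kl` exceeds `reverse_kl`, and the regularized objective prefers a mode-covering student over the collapsed one.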