EVATok: Adaptive Length Video Tokenization for Efficient Visual Autoregressive Generation

2026-03-12
Computer Vision and Pattern Recognition
AI summary

The authors propose EVATok, a method that improves how videos are turned into token sequences for autoregressive video models. Instead of assigning the same number of tokens to every part of a video, EVATok adjusts token use according to content complexity, spending fewer tokens on simple segments and more on detailed ones. This yields better video reconstruction quality at lower computational cost. Lightweight routers quickly predict the best token assignment for each video. Experiments show EVATok is both more effective and more efficient than previous methods on video generation tasks.

autoregressive video models, video tokenizers, token sequences, video reconstruction, adaptive token assignment, routers in machine learning, UCF-101 dataset, class-to-video generation, LARP method, semantic encoders
Authors
Tianwei Xiong, Jun Hao Liew, Zilong Huang, Zhijie Lin, Jiashi Feng, Xihui Liu
Abstract
Autoregressive (AR) video generative models rely on video tokenizers that compress pixels into discrete token sequences. The length of these token sequences is crucial for balancing reconstruction quality against the computational cost of downstream generation. Traditional video tokenizers apply a uniform token assignment across temporal blocks of different videos, often wasting tokens on simple, static, or repetitive segments while underserving dynamic or complex ones. To address this inefficiency, we introduce $\textbf{EVATok}$, a framework to produce $\textbf{E}$fficient $\textbf{V}$ideo $\textbf{A}$daptive $\textbf{Tok}$enizers. Our framework estimates optimal token assignments for each video to achieve the best quality-cost trade-off, develops lightweight routers for fast prediction of these optimal assignments, and trains adaptive tokenizers that encode videos based on the assignments predicted by the routers. We demonstrate that EVATok delivers substantial improvements in efficiency and overall quality for video reconstruction and downstream AR generation. Enhanced by our advanced training recipe that integrates video semantic encoders, EVATok achieves superior reconstruction and state-of-the-art class-to-video generation on UCF-101, with at least 24.4% savings in average token usage compared to the prior state-of-the-art LARP and our fixed-length baseline.
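The abstract describes a router that maps a video's content complexity to a per-block token budget. The following is a purely illustrative sketch of that idea, not the paper's implementation: it uses a simple frame-difference heuristic as the complexity signal, and all function names, the heuristic, and the budgeting rule are assumptions introduced for this example.

```python
# Conceptual sketch (not the authors' method): adaptive per-block token
# budgeting driven by a crude motion-based complexity score.

def block_complexity(frames):
    """Mean absolute frame-to-frame difference as a rough motion proxy.

    Each frame is a flat list of pixel values; static blocks score 0.
    """
    if len(frames) < 2:
        return 0.0
    diffs = [
        sum(abs(a - b) for a, b in zip(prev, curr)) / len(prev)
        for prev, curr in zip(frames, frames[1:])
    ]
    return sum(diffs) / len(diffs)

def route_token_budget(blocks, total_tokens, min_tokens=4):
    """Split a global token budget across temporal blocks in proportion
    to their complexity, guaranteeing each block a small minimum."""
    scores = [block_complexity(b) for b in blocks]
    norm = sum(scores) or 1.0  # avoid division by zero for all-static input
    spare = total_tokens - min_tokens * len(blocks)
    budgets = [min_tokens + int(spare * s / norm) for s in scores]
    # Hand any rounding remainder to the most complex block.
    budgets[scores.index(max(scores))] += total_tokens - sum(budgets)
    return budgets

# Toy example: one static block (identical frames) and one dynamic block.
static_block = [[0, 0, 0, 0]] * 4
dynamic_block = [[0, 0, 0, 0], [9, 9, 9, 9], [0, 0, 0, 0], [9, 9, 9, 9]]
budgets = route_token_budget([static_block, dynamic_block], total_tokens=64)
print(budgets)  # -> [4, 60]: the dynamic block receives far more tokens
```

In the paper's actual framework the routers are learned predictors trained against empirically estimated optimal assignments, rather than a hand-crafted heuristic like this one; the sketch only illustrates the quality-cost intuition of spending tokens where the content is complex.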