Foveated Diffusion: Efficient Spatially Adaptive Image and Video Generation
2026-03-24 • Computer Vision and Pattern Recognition
AI summary
The authors present a method to speed up image and video generation by concentrating detail in the region where the user is looking, called the foveal region, which human vision resolves sharply. They reduce detail in peripheral areas, where human acuity is lower, to save compute without degrading perceived quality. Their system constructs mixed-resolution tokens that blend high- and low-detail regions, and existing models can be post-trained to support this representation. In evaluations, including user studies, the method proved efficient and visually convincing.
diffusion models, flow matching, foveated rendering, eye tracking, tokenization, image generation, video generation, human visual acuity, resolution, mixed-resolution tokens
Authors
Brian Chao, Lior Yariv, Howard Xiao, Gordon Wetzstein
Abstract
Diffusion and flow matching models have unlocked unprecedented capabilities for creative content creation, such as interactive image and streaming video generation. The growing demand for higher resolutions, frame rates, and context lengths, however, makes efficient generation increasingly challenging, as computational complexity grows quadratically with the number of generated tokens. Our work seeks to optimize the efficiency of the generation process in settings where the user's gaze location is known or can be estimated, for example, by using eye tracking. In these settings, we leverage the eccentricity-dependent acuity of human vision: while a user perceives very high-resolution visual information in a small region around their gaze location (the foveal region), the ability to resolve detail quickly degrades in the periphery of the visual field. Our approach starts with a mask modeling the foveated resolution to allocate tokens non-uniformly, assigning higher token density to foveal regions and lower density to peripheral regions. An image or video is generated in a mixed-resolution token setting, yielding results perceptually indistinguishable from full-resolution generation, while drastically reducing the token count and generation time. To this end, we develop a principled mechanism for constructing mixed-resolution tokens directly from high-resolution data, allowing a foveated diffusion model to be post-trained from an existing base model while maintaining content consistency across resolutions. We validate our approach through extensive analysis and a carefully designed user study, demonstrating the efficacy of foveation as a practical and scalable axis for efficient generation.
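The core idea of the mask described in the abstract can be sketched as follows: assign each spatial location a downsampling factor that grows with its eccentricity (distance from the gaze point), so the foveal region keeps full token density while the periphery contributes far fewer tokens. This is a minimal illustrative sketch; the function names, level values, and radius thresholds are assumptions for illustration, not the paper's actual parameters.

```python
import numpy as np

def foveation_mask(height, width, gaze, levels=(1, 2, 4), radii=(0.15, 0.35)):
    """Assign a downsampling factor to each pixel based on its normalized
    eccentricity (distance from the gaze point divided by the image
    diagonal), mimicking eccentricity-dependent visual acuity.
    All thresholds here are illustrative assumptions."""
    ys, xs = np.mgrid[0:height, 0:width]
    gy, gx = gaze
    ecc = np.hypot(ys - gy, xs - gx) / np.hypot(height, width)
    mask = np.full((height, width), levels[-1], dtype=int)  # coarse periphery
    mask[ecc < radii[1]] = levels[1]                        # mid-peripheral ring
    mask[ecc < radii[0]] = levels[0]                        # full-res foveal region
    return mask

def token_count(mask):
    """Approximate token budget: a region downsampled by factor f
    contributes 1/f^2 tokens per pixel."""
    return float(np.sum(1.0 / mask.astype(float) ** 2))

mask = foveation_mask(256, 256, gaze=(128, 128))
full = 256 * 256
print(f"token budget: {token_count(mask):.0f} of {full} at full resolution")
```

Under these (hypothetical) settings, the mixed-resolution budget is a small fraction of the full-resolution token count, which is the source of the quadratic attention savings the abstract describes.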