Generative Refinement Networks for Visual Synthesis

2026-04-14

Computer Vision and Pattern Recognition
AI summary

The authors introduce Generative Refinement Networks (GRN) to improve how computers create images. Unlike current popular methods that spend the same effort on all parts of an image, GRN adjusts its effort based on image complexity, similar to how a human artist gradually refines a painting. They use a new technique called Hierarchical Binary Quantization (HBQ) to keep image details almost perfectly during processing. Their method performs very well on standard tests for image and video creation, and they have shared their models and code for others to use.

Keywords
diffusion models, autoregressive models, Hierarchical Binary Quantization, Generative Refinement Networks, image reconstruction, entropy-guided sampling, class-conditional image generation, text-to-image generation, text-to-video generation, image synthesis
Authors
Jian Han, Jinlai Liu, Jiahuan Wang, Bingyue Peng, Zehuan Yuan
Abstract
While diffusion models dominate the field of visual generation, they are computationally inefficient, applying uniform computational effort regardless of content complexity. In contrast, autoregressive (AR) models are inherently complexity-aware, as evidenced by their variable likelihoods, but are often hindered by lossy discrete tokenization and error accumulation. In this work, we introduce Generative Refinement Networks (GRN), a next-generation visual synthesis paradigm that addresses these issues. At its core, GRN tackles the discrete tokenization bottleneck through a theoretically near-lossless Hierarchical Binary Quantization (HBQ), achieving reconstruction quality comparable to continuous counterparts. Built upon HBQ's latent space, GRN fundamentally upgrades AR generation with a global refinement mechanism that progressively perfects and corrects its output, much as a human artist refines a painting. In addition, GRN integrates an entropy-guided sampling strategy, enabling complexity-aware, adaptive-step generation without compromising visual quality. On the ImageNet benchmark, GRN establishes new records in image reconstruction (0.56 rFID) and class-conditional image generation (1.81 gFID). We also scale GRN to the more challenging tasks of text-to-image and text-to-video generation, delivering superior performance at an equivalent model scale. We release all models and code to foster further research on GRN.
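The abstract does not spell out how HBQ is constructed; one common way to make binary quantization hierarchical (and hence nearly lossless as depth grows) is residual binarization, where each level stores a sign bit per element plus a shared scale and the next level quantizes what remains. The sketch below illustrates that general idea only; the function names and the mean-magnitude scale rule are assumptions for illustration, not the paper's actual HBQ.

```python
def hierarchical_binary_quantize(x, levels=8):
    """Illustrative residual binarization (assumed, not the paper's exact HBQ).

    Each level stores one sign bit per element plus a single shared
    scale, then subtracts its contribution so deeper levels encode
    progressively finer residual detail.
    """
    residual = list(x)
    codes, scales = [], []
    for _ in range(levels):
        scale = sum(abs(r) for r in residual) / len(residual)
        bits = [1.0 if r >= 0 else -1.0 for r in residual]
        codes.append(bits)
        scales.append(scale)
        residual = [r - scale * b for r, b in zip(residual, bits)]
    return codes, scales

def reconstruct(codes, scales):
    """Sum each level's scaled sign bits to approximate the input."""
    out = [0.0] * len(codes[0])
    for bits, scale in zip(codes, scales):
        out = [o + scale * b for o, b in zip(out, bits)]
    return out
```

Under this scheme, reconstruction error shrinks as levels are added, which is the sense in which a deep binary hierarchy can approach a continuous latent; the paper's reported 0.56 rFID suggests its actual formulation achieves this in practice.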