End-to-End Training for Unified Tokenization and Latent Denoising
2026-03-23 • Computer Vision and Pattern Recognition
Computer Vision and Pattern Recognition · Artificial Intelligence · Graphics · Machine Learning
AI summary
The authors propose UNITE, a new method that combines two steps, turning images into latent tokens and generating new tokens from noise, into a single process. Unlike previous approaches that train these steps separately, UNITE uses one shared model, the Generative Encoder, for both tasks at once. This shared training helps the model learn a common way of representing data that works for both images and molecules. Their results show this simpler method achieves near-top performance without extra tools or complicated multi-stage setups.
latent diffusion models · tokenization · autoencoder · latent space · generative encoder · single-stage training · ImageNet · FID score · representation learning · compression
Authors
Shivam Duggal, Xingjian Bai, Zongze Wu, Richard Zhang, Eli Shechtman, Antonio Torralba, Phillip Isola, William T. Freeman
Abstract
Latent diffusion models (LDMs) enable high-fidelity synthesis by operating in learned latent spaces. However, training state-of-the-art LDMs requires complex staging: a tokenizer must be trained first before the diffusion model can be trained in the frozen latent space. We propose UNITE, an autoencoder architecture for unified tokenization and latent diffusion. UNITE consists of a Generative Encoder that serves as both image tokenizer and latent generator via weight sharing. Our key insight is that tokenization and generation can be viewed as the same latent inference problem under different conditioning regimes: tokenization infers latents from fully observed images, whereas generation infers them from noise together with text or class conditioning. Motivated by this, we introduce a single-stage training procedure that jointly optimizes both tasks via two forward passes through the same Generative Encoder. The shared parameters let gradients from both tasks jointly shape the latent space, encouraging a "common latent language". Across image and molecule modalities, UNITE achieves near state-of-the-art performance without adversarial losses or pretrained encoders (e.g., DINO), reaching FID 2.12 and 1.73 for the Base and Large models on ImageNet 256×256. We further analyze the Generative Encoder through the lenses of representation alignment and compression. These results show that single-stage joint training of tokenization and generation from scratch is feasible.
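To make the two-pass procedure concrete, below is a minimal PyTorch sketch of one training step as we read it from the abstract: the same Generative Encoder is run once to tokenize a real image into a latent, and once to denoise a noised copy of that latent under class conditioning, so both losses update the shared weights. The module shapes, the pixel decoder, the interpolation-style noising, and the equal loss weighting are all illustrative assumptions, not the paper's actual design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GenerativeEncoder(nn.Module):
    """One set of weights serves both roles; only the input adapter differs."""

    def __init__(self, dim=256, latent_dim=16, num_classes=1000):
        super().__init__()
        self.image_in = nn.Conv2d(3, dim, kernel_size=16, stride=16)  # patchify images
        self.latent_in = nn.Conv2d(latent_dim, dim, kernel_size=1)    # embed noisy latents
        self.cond = nn.Embedding(num_classes, dim)                    # class conditioning
        self.time = nn.Linear(1, dim)                                 # diffusion timestep
        self.core = nn.Sequential(                                    # shared trunk (stand-in)
            nn.Conv2d(dim, dim, 3, padding=1), nn.GELU(),
            nn.Conv2d(dim, dim, 3, padding=1), nn.GELU(),
        )
        self.latent_out = nn.Conv2d(dim, latent_dim, kernel_size=1)

    def tokenize(self, image):
        """Tokenization pass: infer latents from a fully observed image."""
        return self.latent_out(self.core(self.image_in(image)))

    def denoise(self, z_t, t, y):
        """Generation pass: infer clean latents from noise plus conditioning."""
        h = self.latent_in(z_t)
        h = h + self.cond(y)[:, :, None, None] + self.time(t[:, None])[:, :, None, None]
        return self.latent_out(self.core(h))

def train_step(enc, dec, opt, image, y):
    # Pass 1 (tokenization): image -> latent -> pixel reconstruction.
    z = enc.tokenize(image)
    rec_loss = F.mse_loss(dec(z), image)

    # Pass 2 (generation): noise the latent, predict it back under conditioning.
    t = torch.rand(image.size(0))                      # random time in [0, 1]
    z_t = torch.lerp(z.detach(), torch.randn_like(z), t[:, None, None, None])
    diff_loss = F.mse_loss(enc.denoise(z_t, t, y), z.detach())

    # Both losses backpropagate into the same Generative Encoder weights.
    loss = rec_loss + diff_loss                        # equal weighting is an assumption
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Toy usage on random data (256x256 images, 1000 classes as on ImageNet).
enc = GenerativeEncoder()
dec = nn.Sequential(                                   # illustrative pixel decoder
    nn.Conv2d(16, 256, 3, padding=1), nn.GELU(),
    nn.ConvTranspose2d(256, 3, kernel_size=16, stride=16),
)
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-4)
loss = train_step(enc, dec, opt, torch.randn(2, 3, 256, 256), torch.randint(0, 1000, (2,)))
```

One design choice to note: this sketch detaches the tokenized latent when it serves as the denoising target. Whether UNITE stops that gradient is not stated in the abstract; either way, both forward passes update the shared encoder, which is what lets the two tasks jointly shape the latent space.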