The Diffusion Duality, Chapter II: Ψ-Samplers and Efficient Curriculum

2026-02-24 · Machine Learning
AI summary

The authors introduce a family of Predictor-Corrector (PC) samplers that improves the generation quality of discrete diffusion models for language and images, especially at large sampling-step counts. Unlike ancestral sampling, whose quality plateaus, their PC samplers continue to improve as the number of steps increases. They also introduce a memory-efficient training curriculum that reduces training time and memory without losing performance. Overall, their work positions uniform-state diffusion with PC samplers as a strong alternative to Masked diffusion models for language tasks.

Keywords
discrete diffusion models, predictor-corrector samplers, ancestral sampling, uniform-state diffusion, language modeling, image generation, Gaussian relaxation, perplexity, FID score, curriculum learning
Authors
Justin Deschenaux, Caglar Gulcehre, Subham Sekhar Sahoo
Abstract
Uniform-state discrete diffusion models excel at few-step generation and guidance due to their ability to self-correct, making them preferred over autoregressive or Masked diffusion models in these settings. However, their sampling quality plateaus with ancestral samplers as the number of steps increases. We introduce a family of Predictor-Corrector (PC) samplers for discrete diffusion that generalize prior methods and apply to arbitrary noise processes. When paired with uniform-state diffusion, our samplers outperform ancestral sampling on both language and image modeling, achieving lower generative perplexity at matched unigram entropy on OpenWebText and better FID/IS scores on CIFAR10. Crucially, unlike conventional samplers, our PC methods continue to improve with more sampling steps. Taken together, these findings call into question the assumption that Masked diffusion is the inevitable future of diffusion-based language modeling. Beyond sampling, we develop a memory-efficient curriculum for the Gaussian relaxation training phase, reducing training time by 25% and memory by 33% compared to Duo while maintaining comparable perplexity on OpenWebText and LM1B and strong downstream performance. We release code, checkpoints, and a video tutorial at: https://s-sahoo.com/duo-ch2
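To make the predictor-corrector idea concrete, here is a minimal toy sketch of a PC sampling loop for a uniform-state discrete diffusion process. This is not the paper's actual algorithm: the noise schedule, the `uniform_noise` corruption, the stand-in `toy_denoiser_probs` model, and the corrector rule (re-denoise and re-noise at the current noise level, which is what lets earlier tokens be revised) are all illustrative assumptions.

```python
import random

def uniform_noise(x, t, vocab_size, rng):
    # Forward uniform-state corruption (toy schedule): each token is
    # replaced by a uniformly random token with probability t.
    return [rng.randrange(vocab_size) if rng.random() < t else tok
            for tok in x]

def toy_denoiser_probs(x, t, vocab_size):
    # Stand-in for a learned denoiser: returns a per-position categorical
    # distribution over clean tokens. Here it simply favors token 0.
    p_rest = 0.1 / (vocab_size - 1)
    return [[0.9] + [p_rest] * (vocab_size - 1) for _ in x]

def sample_categorical(probs_row, rng):
    # Draw one index from a categorical distribution by inverse CDF.
    u, acc = rng.random(), 0.0
    for i, p in enumerate(probs_row):
        acc += p
        if u < acc:
            return i
    return len(probs_row) - 1

def pc_sample(seq_len, vocab_size, n_steps, n_corrector, rng):
    # Generic predictor-corrector loop: each outer step takes one
    # ancestral (predictor) move toward t=0, then n_corrector extra
    # "denoise, then re-noise to the same level" corrector moves,
    # which allow already-generated tokens to be self-corrected.
    x = [rng.randrange(vocab_size) for _ in range(seq_len)]  # pure noise
    ts = [1.0 - k / n_steps for k in range(n_steps + 1)]
    for t_cur, t_next in zip(ts[:-1], ts[1:]):
        for _ in range(1 + n_corrector):  # 1 predictor + correctors
            probs = toy_denoiser_probs(x, t_cur, vocab_size)
            x0 = [sample_categorical(row, rng) for row in probs]
            x = uniform_noise(x0, t_next, vocab_size, rng)
    return x

rng = random.Random(0)
sample = pc_sample(seq_len=16, vocab_size=8, n_steps=10, n_corrector=2, rng=rng)
print(sample)
```

With a real learned denoiser in place of `toy_denoiser_probs`, the corrector inner loop is what distinguishes this from plain ancestral sampling: extra iterations keep refining the sequence at a fixed noise level, which matches the abstract's claim that quality keeps improving as the step budget grows rather than plateauing.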