Efficient Sampling with Discrete Diffusion Models: Sharp and Adaptive Guarantees

2026-02-16

Machine Learning · Information Theory
AI summary

The authors study how to sample efficiently from discrete diffusion models, which have recently performed well in practice but still lack a complete theory. They focus on the τ-leaping sampler and prove sharp bounds on how many steps it needs to produce accurate samples. For uniform diffusion, their analysis removes the dependence on the vocabulary size and improves prior bounds by a factor of the dimension; a matching lower bound shows that the remaining linear dependence on the dimension cannot be avoided in general. For masking diffusion, they introduce a modified sampler whose rate is governed by an intrinsic quantity, the effective total correlation, so it automatically exploits hidden structure in the data without extra tuning. Their guarantees require only control of the score entropy loss, with no boundedness or smoothness assumptions on the score estimator.
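To make the sampling loop concrete, here is a minimal NumPy sketch of a τ-leaping sampler for the uniform noising process. The function name, the interface of score_fn, and the choice of forward jump rate 1/S are illustrative assumptions for exposition, not the paper's exact algorithm or constants.

```python
import numpy as np

def tau_leaping_sampler(score_fn, d, S, T=1.0, num_steps=100, rng=None):
    """Illustrative tau-leaping sampler for a uniform-noising discrete diffusion.

    score_fn(x, t) is assumed to return an array of shape (d, S) whose entry
    [i, y] estimates the probability ratio p_t(x with coordinate i set to y) / p_t(x);
    this interface and the 1/S rate normalisation are assumptions, not the
    paper's specification.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = rng.integers(0, S, size=d)        # start from the uniform stationary law
    dt = T / num_steps
    for step in range(num_steps):
        t = T - step * dt                 # run the reverse chain from time T down to 0
        scores = score_fn(x, t)           # shape (d, S): estimated probability ratios
        rates = scores / S                # reverse-time jump rates (uniform forward rate 1/S)
        rates[np.arange(d), x] = 0.0      # no self-transitions
        total = rates.sum(axis=1)
        # tau-leaping: freeze the rates over the step, let every coordinate decide
        # independently whether to jump, then apply all jumps simultaneously.
        jump = rng.random(d) < 1.0 - np.exp(-dt * total)
        for i in np.nonzero(jump)[0]:
            x[i] = rng.choice(S, p=rates[i] / total[i])
    return x

# Toy usage with a dummy score estimate that returns uniform ratios.
sample = tau_leaping_sampler(lambda x, t: np.ones((8, 20)), d=8, S=20)
```

The defining feature of τ-leaping, as opposed to exact event-by-event CTMC simulation, is that all coordinates are updated at once within a step using rates frozen at the step's start, so the per-step cost scales with the dimension rather than with the number of individual jump events.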

discrete diffusion models, score-based models, continuous-time Markov chain, τ-leaping algorithm, KL divergence, uniform noising process, masking noising process, effective total correlation, score entropy loss, sampling complexity
Authors
Daniil Dmitriev, Zhihan Huang, Yuting Wei
Abstract
Diffusion models over discrete spaces have recently shown striking empirical success, yet their theoretical foundations remain incomplete. In this paper, we study the sampling efficiency of score-based discrete diffusion models under a continuous-time Markov chain (CTMC) formulation, with a focus on $\tau$-leaping-based samplers. We establish sharp convergence guarantees for attaining $\varepsilon$ accuracy in Kullback-Leibler (KL) divergence for both uniform and masking noising processes. For uniform discrete diffusion, we show that the $\tau$-leaping algorithm achieves an iteration complexity of order $\tilde O(d/\varepsilon)$, with $d$ the ambient dimension of the target distribution, eliminating linear dependence on the vocabulary size $S$ and improving existing bounds by a factor of $d$; moreover, we establish a matching algorithmic lower bound showing that linear dependence on the ambient dimension is unavoidable in general. For masking discrete diffusion, we introduce a modified $\tau$-leaping sampler whose convergence rate is governed by an intrinsic information-theoretic quantity, termed the effective total correlation, which is bounded by $d \log S$ but can be sublinear or even constant for structured data. As a consequence, the sampler provably adapts to low-dimensional structure without prior knowledge or algorithmic modification, yielding sublinear convergence rates for various practical examples (such as hidden Markov models, image data, and random graphs). Our analysis requires no boundedness or smoothness assumptions on the score estimator beyond control of the score entropy loss.
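For orientation, the classical total correlation of a distribution $p$ over $\{1,\dots,S\}^d$ is the standard quantity that the name "effective total correlation" alludes to; the paper's effective version is a refinement defined there, so the display below is only the familiar reference point and its crude $d \log S$ bound.

```latex
% Classical total correlation; the paper's "effective total correlation" is a
% refinement whose precise definition is given in the paper itself.
\[
  \mathrm{TC}(p)
  \;=\; \sum_{i=1}^{d} H(X_i) - H(X_1,\dots,X_d)
  \;=\; \mathrm{KL}\!\left(p \,\middle\|\, \textstyle\prod_{i=1}^{d} p_i\right)
  \;\le\; d \log S .
\]
% TC(p) vanishes for product distributions, which is consistent with the claim
% that weakly dependent (structured) data can make the governing quantity
% sublinear or even constant in d.
```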