T3D: Few-Step Diffusion Language Models via Trajectory Self-Distillation with Direct Discriminative Optimization
2026-02-12 • Computation and Language • Machine Learning
AI summary
The authors study diffusion large language models (DLLMs), which can speed up text generation by predicting multiple tokens at once. In practice, however, these models need many refinement steps to produce good text, and cutting the number of steps hurts quality. The paper proposes trajectory self-distillation, in which the model learns from its own generative trajectories to perform better with fewer steps, combined with a mode-seeking reverse-KL objective (Direct Discriminative Optimization). The approach improves generation quality under tight step budgets, though full-step decoding still works best.
Diffusion Models • Large Language Models • Text Generation • Self-Distillation • Direct Discriminative Optimization • Reverse KL Divergence • Parallel Decoding • Few-Step Decoding • Model Distillation • Inference Efficiency
Authors
Tunyu Zhang, Xinxi Zhang, Ligong Han, Haizhou Shi, Xiaoxiao He, Zhuowei Li, Hao Wang, Kai Xu, Akash Srivastava, Hao Wang, Vladimir Pavlovic, Dimitris N. Metaxas
Abstract
Diffusion large language models (DLLMs) have the potential to enable fast text generation by decoding multiple tokens in parallel. However, in practice, their inference efficiency is constrained by the need for many refinement steps, while aggressively reducing the number of steps leads to a substantial degradation in generation quality. To alleviate this, we propose a trajectory self-distillation framework that improves few-step decoding by distilling the model's own generative trajectories. We incorporate Direct Discriminative Optimization (DDO), a reverse-KL objective that promotes mode-seeking distillation and encourages the student to concentrate on high-probability teacher modes. Across benchmarks, our approach consistently outperforms strong few-step baselines and standard training under tight step budgets. Although full-step decoding remains superior, we substantially narrow the gap, establishing a strong foundation towards practical few-step DLLMs. The source code is available at https://github.com/Tyrion58/T3D.
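The abstract attributes the mode-seeking behavior to a reverse-KL objective: minimizing KL(student ∥ teacher) heavily penalizes any probability mass the student places where the teacher assigns little, so the student concentrates on high-probability teacher modes rather than covering the whole distribution. A minimal sketch of this asymmetry on toy token distributions (this is an illustrative reverse-KL computation, not the paper's DDO implementation; all names and distributions here are invented for the example):

```python
import math

def reverse_kl(student_probs, teacher_probs, eps=1e-12):
    """Reverse KL divergence KL(student || teacher) over a discrete vocabulary.

    Terms where the student puts mass on low-teacher-probability tokens
    blow up, which is what makes this objective mode-seeking. The eps
    guard only stabilizes the logarithm; zero-mass student tokens
    contribute nothing and are skipped.
    """
    return sum(
        q * math.log((q + eps) / (p + eps))
        for q, p in zip(student_probs, teacher_probs)
        if q > 0.0
    )

# A bimodal teacher over a 4-token vocabulary (toy numbers).
teacher = [0.49, 0.49, 0.01, 0.01]

# A student that collapses onto one teacher mode vs. one that
# spreads mass uniformly, including over low-probability tokens.
mode_seeking_student = [0.98, 0.0, 0.01, 0.01]
covering_student = [0.25, 0.25, 0.25, 0.25]

print(reverse_kl(mode_seeking_student, teacher))  # small: stays on a teacher mode
print(reverse_kl(covering_student, teacher))      # large: mass on rare tokens is punished
```

Under reverse KL the mode-collapsed student scores better than the covering one, whereas forward KL (teacher ∥ student) would rank them the other way; this asymmetry is why the objective encourages the student to concentrate on the teacher's dominant modes during few-step distillation.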