Turning the TIDE: Cross-Architecture Distillation for Diffusion Large Language Models

2026-04-29 · Computation and Language

Computation and Language · Artificial Intelligence · Machine Learning
AI summary

The authors introduce TIDE, a new method to help smaller diffusion large language models (dLLMs) learn from much bigger ones, even when their designs are very different. Their approach combines three techniques to better handle noise, improve understanding of masked text, and deal with different tokenizers between teacher and student models. Using TIDE, the authors successfully trained a much smaller model that performs better than standard methods on various tests, especially in code generation tasks. This shows their framework can effectively transfer knowledge across different model types.

diffusion large language models · model distillation · cross-architecture transfer · tokenizer · noise-dependent reliability · masking · code generation · mixture of experts · bidirectional context
Authors
Gongbo Zhang, Wen Wang, Ye Tian, Li Yuan
Abstract
Diffusion large language models (dLLMs) offer parallel decoding and bidirectional context, but state-of-the-art dLLMs require billions of parameters for competitive performance. While existing distillation methods for dLLMs reduce inference steps within a single architecture, none address cross-architecture knowledge transfer, in which the teacher and student differ in architecture, attention mechanism, and tokenizer. We present TIDE, the first framework for cross-architecture dLLM distillation, comprising three modular components: (1) TIDAL, which jointly modulates distillation strength across training progress and diffusion timestep to account for the teacher's noise-dependent reliability; (2) CompDemo, which enriches the teacher's context via complementary mask splitting to improve predictions under heavy masking; and (3) Reverse CALM, a cross-tokenizer objective that inverts chunk-level likelihood matching, yielding bounded gradients and dual-end noise filtering. Distilling 8B dense and 16B MoE teachers into a 0.6B student via two heterogeneous pipelines yields a model that outperforms the baseline by an average of 1.53 points across eight benchmarks, with notable gains in code generation, where HumanEval scores reach 48.78 versus 32.3 for the AR baseline.
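To make the TIDAL idea concrete: the abstract describes modulating distillation strength jointly over training progress and diffusion timestep, since the teacher's predictions are less reliable at heavily noised timesteps. The sketch below is illustrative only, assuming a simple linear reliability ramp and a KL-based distillation loss; the function names (`tidal_weight`, `distill_loss`) and the exact weighting form are our assumptions, not the paper's actual schedule.

```python
import torch
import torch.nn.functional as F

def tidal_weight(timestep: int, progress: float, max_t: int = 1000) -> float:
    """Hypothetical weighting: trust the teacher less at noisier timesteps,
    and ramp distillation strength up as training progresses (0 -> 1)."""
    noise_level = timestep / max_t       # 0.0 = clean input, 1.0 = fully masked
    reliability = 1.0 - noise_level      # teacher is less reliable under heavy noise
    return progress * reliability

def distill_loss(student_logits: torch.Tensor,
                 teacher_logits: torch.Tensor,
                 timestep: int,
                 progress: float) -> torch.Tensor:
    """Token-level KL between student and teacher distributions,
    scaled by the noise/progress-dependent weight."""
    w = tidal_weight(timestep, progress)
    kl = F.kl_div(
        F.log_softmax(student_logits, dim=-1),
        F.softmax(teacher_logits, dim=-1),
        reduction="batchmean",
    )
    return w * kl
```

Under this toy schedule the distillation signal vanishes at fully masked timesteps and at the start of training, and is strongest late in training on lightly noised inputs; the paper's actual modulation may differ.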