Sink-Aware Pruning for Diffusion Language Models
2026-02-19 • Computation and Language
Computation and Language · Artificial Intelligence · Machine Learning
AI summary
The authors study Diffusion Language Models (DLMs), which are slow because they require many denoising steps to generate text. They find that a common heuristic from autoregressive language models, preserving attention-sink tokens during pruning, does not carry over to DLMs: the sink positions shift substantially over the course of generation. Based on this insight, they propose Sink-Aware Pruning, a method that identifies and removes these unstable sinks without retraining. It achieves a better quality-efficiency trade-off than existing pruning techniques under matched compute.
Diffusion Language Models · Pruning · Attention Mechanism · Autoregressive Models · Inference Cost · Denoising · Token · Language Models · Efficiency · Model Compression
Authors
Aidar Myrzakhan, Tianyi Li, Bowei Guo, Shengkun Tang, Zhiqiang Shen
Abstract
Diffusion Language Models (DLMs) incur high inference cost due to iterative denoising, motivating efficient pruning. Existing pruning heuristics, largely inherited from autoregressive (AR) LLMs, typically preserve attention-sink tokens because AR sinks serve as stable global anchors. We show that this assumption does not hold for DLMs: the attention-sink position exhibits substantially higher variance over the full generation trajectory (measured by how the dominant sink locations shift across timesteps), indicating that sinks are often transient and less structurally essential than in AR models. Based on this observation, we propose Sink-Aware Pruning, which automatically identifies and prunes unstable sinks in DLMs, whereas prior work on AR LLMs typically preserves them. Without retraining, our method achieves a better quality-efficiency trade-off and outperforms strong prior pruning baselines under matched compute. Our code is available at https://github.com/VILA-Lab/Sink-Aware-Pruning.
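The diagnostic underlying the method is how much the dominant sink position moves across denoising timesteps. The Python sketch below illustrates one way such a statistic could be computed from recorded attention maps; the array shapes, function names, and the specific instability measure are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of a sink-instability measurement, assuming access to
# per-step attention maps from a DLM. All names (attn_maps, top_k, the
# adjacent-step shift statistic) are hypothetical illustrations.
import numpy as np

def dominant_sinks(attn_maps: np.ndarray, top_k: int = 1) -> np.ndarray:
    """attn_maps: [T, H, L, L] attention over T denoising steps.
    Returns the top-k key positions receiving the most attention per step."""
    # Total attention mass each key position receives, averaged over heads
    # and summed over query positions.
    mass = attn_maps.mean(axis=1).sum(axis=1)     # [T, L]
    return np.argsort(mass, axis=-1)[:, -top_k:]  # [T, top_k]

def sink_instability(attn_maps: np.ndarray) -> float:
    """Fraction of adjacent denoising steps whose dominant sink moves."""
    sinks = dominant_sinks(attn_maps)[:, -1]      # [T]
    return float(np.mean(sinks[1:] != sinks[:-1]))

# Toy usage: random maps stand in for attention recorded from a real DLM.
T, H, L = 8, 4, 16
attn = np.random.rand(T, H, L, L)
attn /= attn.sum(axis=-1, keepdims=True)          # row-normalize per query
print(f"sink instability: {sink_instability(attn):.2f}")
# A high value suggests the sink is transient, so (unlike in AR LLMs)
# it need not be protected when selecting tokens to prune.
```

Under this reading, a pruning policy would treat a high-instability sink as an ordinary candidate for removal rather than a protected anchor, which is the behavior the abstract attributes to Sink-Aware Pruning.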