TIP: Token Importance in On-Policy Distillation

2026-04-15

Machine Learning · Artificial Intelligence
AI summary

The authors study which token positions in the student model's outputs carry the most useful learning signal during on-policy distillation (OPD), where a student learns from its own rollouts under token-level guidance from a teacher. They find that informative tokens fall into two regions: positions where the student is uncertain (high entropy), and positions where it is confidently wrong (low entropy but high disagreement with the teacher). Their framework, TIP, organizes tokens along these two axes of uncertainty and disagreement, motivating selection rules that train on only the most informative tokens and substantially reduce computation without losing accuracy. They validate these findings across multiple teacher-student pairs and tasks, demonstrating memory-efficient training with far fewer tokens.

On-policy knowledge distillation · Student entropy · Teacher-student divergence · Token importance · Entropy-based sampling · Model distillation · Token-level supervision · Uncertainty quantification · Memory-efficient training
Authors
Yuanda Xu, Hejian Sang, Zhengze Zhou, Ran He, Zhipeng Wang, Alborz Geramifard
Abstract
On-policy knowledge distillation (OPD) trains a student on its own rollouts under token-level supervision from a teacher. Not all token positions matter equally, but existing views of token importance are incomplete. We ask a direct question: which tokens carry the most useful learning signal in OPD? Our answer is that informative tokens come from two regions: positions with high student entropy, and positions with low student entropy plus high teacher-student divergence, where the student is overconfident and wrong. Empirically, student entropy is a strong first-order proxy: retaining $50\%$ of tokens with entropy-based sampling matches or exceeds all-token training while reducing peak memory by up to $47\%$. But entropy alone misses a second important region. When we isolate low-entropy, high-divergence tokens, training on fewer than $10\%$ of all tokens nearly matches full-token baselines, showing that overconfident tokens carry dense corrective signal despite being nearly invisible to entropy-only rules. We organize these findings with TIP (Token Importance in on-Policy distillation), a two-axis taxonomy over student entropy and teacher-student divergence, and give a theoretical explanation for why entropy is useful yet structurally incomplete. This view motivates type-aware token selection rules that combine uncertainty and disagreement. We validate this picture across three teacher-student pairs spanning Qwen3, Llama, and Qwen2.5 on MATH-500 and AIME 2024/2025, and on the DeepPlanning benchmark for long-horizon agentic planning, where Q3-only training on $<20\%$ of tokens surpasses full-token OPD. Our experiments are implemented by extending the OPD repository https://github.com/HJSang/OPSD_OnPolicyDistillation, which supports memory-efficient distillation of larger models under limited GPU budgets.
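The two-axis selection idea in the abstract can be sketched in a few lines. This is a minimal illustrative implementation, not the paper's actual code: the function names, the hard thresholds (`entropy_thresh`, `kl_thresh`), and the use of KL(teacher || student) as the divergence measure are assumptions for demonstration; the paper's selection rules (e.g., entropy-based sampling of a fixed token fraction) may differ.

```python
import numpy as np

def token_entropy(probs):
    """Shannon entropy per token position; probs has shape (T, V)."""
    return -np.sum(probs * np.log(probs + 1e-12), axis=-1)

def kl_divergence(p_teacher, p_student):
    """Per-token KL(teacher || student); both inputs have shape (T, V)."""
    return np.sum(
        p_teacher * (np.log(p_teacher + 1e-12) - np.log(p_student + 1e-12)),
        axis=-1,
    )

def select_tokens(p_student, p_teacher, entropy_thresh=0.5, kl_thresh=0.5):
    """Two-axis token selection in the spirit of TIP (illustrative thresholds).

    Keeps (a) high-entropy tokens, where the student is uncertain, and
    (b) low-entropy, high-divergence tokens, where the student is
    confidently wrong relative to the teacher.
    Returns a boolean mask of shape (T,).
    """
    H = token_entropy(p_student)
    D = kl_divergence(p_teacher, p_student)
    high_entropy = H >= entropy_thresh
    overconfident_wrong = (H < entropy_thresh) & (D >= kl_thresh)
    return high_entropy | overconfident_wrong

# Toy example with a 2-token vocabulary and 3 positions:
# position 0: student uncertain           -> kept (high entropy)
# position 1: student confident but wrong -> kept (low entropy, high KL)
# position 2: student confident and right -> dropped
p_student = np.array([[0.5, 0.5], [0.99, 0.01], [0.99, 0.01]])
p_teacher = np.array([[0.5, 0.5], [0.01, 0.99], [0.99, 0.01]])
mask = select_tokens(p_student, p_teacher)
print(mask.tolist())  # [True, True, False]
```

The distillation loss would then be computed only at masked positions, which is how training on a small fraction of tokens (e.g., the reported $<20\%$) can still deliver the dense corrective signal described above.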