MoEKD: Mixture-of-Experts Knowledge Distillation for Robust and High-Performing Compressed Code Models
2026-03-13 • Software Engineering
AI summary
The authors focus on making large code-understanding models smaller and faster without losing their strength. They found that using just one big model to teach a smaller model can make it less robust against adversarial inputs. To fix this, they created MoEKD, which learns from several expert models instead of just one. Their tests show that this method helps smaller models work better and stay stronger against attacks. This means combining lessons from several experts can make compact models both accurate and robust.
Large Language Models · Knowledge Distillation · Adversarial Robustness · Mixture of Experts · CodeBERT · GraphCodeBERT · Vulnerability Detection · Model Compression · Router Mechanism · Ultra-compact Models
Authors
Md. Abdul Awal, Mrigank Rochan, Chanchal K. Roy
Abstract
Large language models for code have achieved strong performance across diverse software analytics tasks, yet their real-world adoption remains limited by high computational demands, slow inference speeds, significant energy consumption, and environmental impact. Knowledge distillation (KD) offers a practical solution by transferring knowledge from a large model to a smaller and more efficient model. Despite its effectiveness, recent studies show that models distilled from a single source often exhibit degraded adversarial robustness, even when robustness-aware distillation techniques are employed. These observations suggest a fundamental limitation of single-source distillation in simultaneously transferring high-quality and robust knowledge. To overcome this limitation, we propose Mixture of Experts Knowledge Distillation (MoEKD), a KD framework that leverages a Mixture of Experts (MoE) architecture to enable more effective and robust knowledge transfer from multiple specialized experts into a compact model. MoEKD decomposes the distillation process into expert and router training, aggregation of expert knowledge through a learned routing mechanism, and distillation from the aggregated knowledge. We evaluate MoEKD on the vulnerability detection task using CodeBERT and GraphCodeBERT models. Experimental results show that MoEKD not only improves adversarial robustness by up to 35.8%, but also enhances predictive performance by up to 13%, compared to state-of-the-art KD baselines, including Compressor and AVATAR. Furthermore, an ablation study demonstrates that aggregating expert knowledge enables ultra-compact models to maintain competitive performance even when their size is reduced by approximately half. Overall, these results highlight the effectiveness of multi-expert knowledge aggregation in addressing key limitations of existing single-source KD approaches.
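The abstract describes three stages: training experts and a router, aggregating expert knowledge through the learned routing mechanism, and distilling from the aggregated knowledge. A minimal sketch of the aggregation and distillation step might look like the following. This is an illustrative assumption, not the authors' implementation: the `Router` class, `moekd_loss` function, temperature, and loss weighting are all hypothetical placeholders for how router-weighted teacher logits could drive a standard KD objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Router(nn.Module):
    """Learned router: maps an input representation to softmax weights
    over the expert teachers (a common MoE gating design)."""
    def __init__(self, hidden_dim, num_experts):
        super().__init__()
        self.gate = nn.Linear(hidden_dim, num_experts)

    def forward(self, h):                        # h: (batch, hidden_dim)
        return F.softmax(self.gate(h), dim=-1)   # (batch, num_experts)

def aggregate_expert_logits(expert_logits, weights):
    """Weighted combination of per-expert logits.
    expert_logits: (batch, num_experts, num_classes)
    weights:       (batch, num_experts)
    """
    return torch.einsum("bec,be->bc", expert_logits, weights)

def moekd_loss(student_logits, expert_logits, weights, labels,
               temperature=2.0, alpha=0.5):
    """Hypothetical distillation objective: KL divergence to the
    aggregated teacher distribution plus cross-entropy on hard labels."""
    teacher_logits = aggregate_expert_logits(expert_logits, weights)
    soft_teacher = F.log_softmax(teacher_logits / temperature, dim=-1)
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    kd = F.kl_div(soft_student, soft_teacher, log_target=True,
                  reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce

# Toy usage: three experts, binary vulnerability detection.
torch.manual_seed(0)
batch, hidden, num_experts, num_classes = 4, 16, 3, 2
router = Router(hidden, num_experts)
h = torch.randn(batch, hidden)                         # shared representation
w = router(h)                                          # routing weights
e_logits = torch.randn(batch, num_experts, num_classes)  # stand-in expert outputs
s_logits = torch.randn(batch, num_classes)               # stand-in student outputs
labels = torch.randint(0, num_classes, (batch,))
loss = moekd_loss(s_logits, e_logits, w, labels)
print(loss.item())
```

In this sketch the router's weights sum to one per example, so the aggregated logits remain a convex combination of the experts; the actual MoEKD routing and training procedure is described in the paper itself.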