🤖 AI Summary
This work addresses the limitations of conventional policy distillation based on reverse KL divergence, which often suffers from reduced generation diversity and unstable learning signals when the teacher model exhibits high output entropy. To mitigate this, the authors propose an entropy-aware adaptive policy distillation method that dynamically switches between reverse and forward KL divergences based on the entropy of the teacher's output distribution: forward KL is employed under high entropy to enhance mode coverage, while reverse KL is retained under low entropy to ensure precise imitation. The approach integrates a hybrid objective with an adaptive weighting mechanism, improving knowledge transfer without compromising training efficiency. Experiments demonstrate consistent gains across six mathematical reasoning benchmarks, with Pass@8 accuracy improvements of +1.37, +2.39, and +5.05 on Qwen3-series models, alongside stable token-level entropy throughout training.
📄 Abstract
On-policy distillation is a promising approach for transferring knowledge between language models, where a student learns from dense token-level signals along its own trajectories. This framework typically uses reverse KL divergence, encouraging the student to match the teacher's high-confidence predictions. However, we show that the mode-seeking property of reverse KL reduces generation diversity and yields unstable learning signals when the teacher distribution has high entropy. To address this, we introduce Entropy-Aware On-Policy Distillation. Our key idea is to augment the standard reverse KL objective with forward KL when teacher entropy is high, capturing the full range of plausible outputs while retaining precise imitation elsewhere. This balances mode-seeking precision with mode-covering robustness without sacrificing on-policy training efficiency. Experiments show that our method maintains generation diversity (sustained token-level entropy) and improves student-teacher alignment (lower forward KL on high-entropy tokens). Across six math reasoning benchmarks, this yields Pass@8 accuracy gains of +1.37 for Qwen3-0.6B-Base, +2.39 for Qwen3-1.7B-Base, and +5.05 for Qwen3-4B-Base compared to baseline on-policy distillation methods. These results demonstrate that accounting for teacher uncertainty is essential for maintaining diversity and achieving effective knowledge transfer.
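To make the core mechanism concrete, here is a minimal, illustrative sketch of an entropy-aware hybrid KL objective for a single token position. The interpolation rule (a weight `w = min(1, H(teacher)/tau)` with a hypothetical threshold `tau`) is an assumption for illustration, not the paper's exact adaptive weighting; the paper's actual objective may differ.

```python
import math

def entropy(p):
    """Shannon entropy of a categorical distribution, in nats."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def kl(p, q):
    """KL(p || q) in nats; assumes q > 0 wherever p > 0."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def hybrid_kl_loss(teacher, student, tau=1.0):
    """Entropy-aware mix of reverse and forward KL (illustrative sketch).

    Low teacher entropy  -> weight leans on reverse KL (mode-seeking,
    precise imitation of high-confidence predictions).
    High teacher entropy -> weight leans on forward KL (mode-covering,
    spreading mass over the full range of plausible outputs).
    `tau` is a hypothetical entropy scale, not a parameter from the paper.
    """
    w = min(1.0, entropy(teacher) / tau)
    reverse_kl = kl(student, teacher)  # mode-seeking term
    forward_kl = kl(teacher, student)  # mode-covering term
    return (1.0 - w) * reverse_kl + w * forward_kl
```

For example, a near-deterministic teacher distribution yields a small weight `w`, so the loss is dominated by the reverse KL term, while a near-uniform teacher pushes `w` toward 1 and the forward KL term. In practice this would be applied per token over the student's own sampled trajectories, consistent with the on-policy setup described above.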