🤖 AI Summary
This work proposes an efficient prefix distillation method to address the high computational cost of traditional on-policy distillation (OPD), which requires sampling full sequences from the student policy and becomes particularly inefficient for long outputs. Based on the key observation that distillation signals are concentrated primarily in output prefixes, the proposed approach applies the distillation loss only to student-generated prefixes and terminates sampling early. This substantially reduces computational overhead while matching the performance of full OPD. Empirical evaluations on AI-for-Math and out-of-domain benchmarks show a 2x-47x reduction in training FLOPs with no loss in model effectiveness.
📝 Abstract
On-policy distillation (OPD), which samples trajectories from the student model and supervises them with a teacher at the token level, avoids relying solely on verifiable terminal rewards and can generalize better than off-policy distillation. However, OPD requires expensive on-the-fly sampling from the student policy during training, which substantially increases training cost, especially for long responses. Our initial analysis shows that, during OPD, training signals are often concentrated in the prefix of each output, and that even a short teacher-generated prefix can significantly help the student produce the correct answer. Motivated by these observations, we propose a simple yet effective modification of OPD: we apply the distillation objective only to prefixes of student-generated outputs and terminate sampling early during distillation. Experiments on a suite of AI-for-Math and out-of-domain benchmarks show that on-policy prefix distillation matches the performance of full OPD while reducing training FLOPs by 2x-47x.
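The core modification described above can be sketched in a few lines: sample only a short prefix from the student policy (stopping early instead of decoding the full response), then apply a token-level distillation loss over just those prefix positions. The sketch below is a toy NumPy illustration under our own assumptions; the function names (`sample_prefix`, `prefix_distillation_loss`) and the choice of reverse KL as the token-level objective are hypothetical, not taken from the paper.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def sample_prefix(student_logits_fn, prompt, max_prefix_len, rng):
    """Sample at most `max_prefix_len` tokens from the student policy,
    terminating generation early rather than decoding a full response.
    `student_logits_fn` is a hypothetical stand-in for a student forward pass."""
    tokens = list(prompt)
    for _ in range(max_prefix_len):
        probs = softmax(student_logits_fn(tokens))
        tokens.append(int(rng.choice(len(probs), p=probs)))
    return tokens[len(prompt):]

def prefix_distillation_loss(student_logits, teacher_logits):
    """Token-level reverse KL(student || teacher), averaged over the sampled
    prefix positions only. Inputs are (prefix_len, vocab) logit arrays."""
    p_s = softmax(student_logits)
    log_ratio = np.log(p_s) - np.log(softmax(teacher_logits))
    return float((p_s * log_ratio).sum(axis=-1).mean())

# Illustrative usage on random toy logits (vocab size 8, prefix length 5).
rng = np.random.default_rng(0)
prefix = sample_prefix(lambda toks: np.zeros(8), prompt=[1, 2], max_prefix_len=5, rng=rng)
loss = prefix_distillation_loss(rng.normal(size=(5, 8)), rng.normal(size=(5, 8)))
```

Because the loss is computed only over `max_prefix_len` positions and sampling stops there, both the decoding cost and the backward-pass cost scale with the prefix length rather than the full response length, which is the source of the claimed FLOPs savings.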