Fast and Effective On-policy Distillation from Reasoning Prefixes

📅 2026-02-16
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work proposes an efficient prefix distillation method to address the high computational cost of traditional on-policy distillation (OPD), which requires sampling full student policy sequences and becomes particularly inefficient when generating long outputs. Based on the key observation that distillation signals are primarily concentrated in the output prefixes, the proposed approach applies distillation loss only to the student-generated prefixes and terminates sampling early. This strategy substantially reduces computational overhead while preserving performance comparable to full OPD. Empirical evaluations on AI-for-Math and out-of-domain benchmarks demonstrate that the method achieves a 2× to 47× reduction in training FLOPs without sacrificing model effectiveness, significantly enhancing training efficiency.

📝 Abstract
On-policy distillation (OPD), which samples trajectories from the student model and supervises them with a teacher at the token level, avoids relying solely on verifiable terminal rewards and can yield better generalization than off-policy distillation. However, OPD requires expensive on-the-fly sampling of the student policy during training, which substantially increases training cost, especially for long responses. Our initial analysis shows that, during OPD, training signals are often concentrated in the prefix of each output, and that even a short teacher-generated prefix can significantly help the student produce the correct answer. Motivated by these observations, we propose a simple yet effective modification of OPD: we apply the distillation objective only to prefixes of student-generated outputs and terminate sampling early for each trajectory during distillation. Experiments on a suite of AI-for-Math and out-of-domain benchmarks show that on-policy prefix distillation matches the performance of full OPD while reducing training FLOPs by 2×-47×.
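The core idea in the abstract, truncating student rollouts early and applying the token-level distillation loss only on the sampled prefix, can be sketched as follows. This is a minimal illustration, not the paper's exact implementation: the function name, the forward-KL choice, and the tensor shapes are all assumptions.

```python
import torch
import torch.nn.functional as F

def prefix_distillation_loss(student_logits: torch.Tensor,
                             teacher_logits: torch.Tensor,
                             prefix_len: int) -> torch.Tensor:
    """Token-level distillation loss restricted to the output prefix.

    student_logits, teacher_logits: (seq_len, vocab_size) logits scored on
    the same student-sampled tokens. In prefix distillation the rollout is
    terminated after `prefix_len` tokens, so positions beyond the prefix
    are never sampled or supervised.
    """
    # Log-probabilities over the prefix positions only.
    s = F.log_softmax(student_logits[:prefix_len], dim=-1)
    t = F.log_softmax(teacher_logits[:prefix_len], dim=-1)
    # KL(student || teacher), averaged over prefix positions. This is one
    # common OPD objective; the paper may use a different divergence.
    return (s.exp() * (s - t)).sum(dim=-1).mean()

# Illustrative shapes: a 10-token rollout, vocab of 32, supervised prefix of 4.
torch.manual_seed(0)
student = torch.randn(10, 32)
teacher = torch.randn(10, 32)
loss = prefix_distillation_loss(student, teacher, prefix_len=4)
```

Because generation stops at `prefix_len`, the sampling cost scales with the prefix length rather than the full response length, which is where the reported FLOP savings come from.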
Problem

Research questions and friction points this paper is trying to address.

on-policy distillation
training cost
student policy sampling
token-level supervision
trajectory sampling
Innovation

Methods, ideas, or system contributions that make the work stand out.

on-policy distillation
prefix distillation
token-level supervision
training efficiency
reasoning prefixes