🤖 AI Summary
This work addresses the challenge that small language models (SLMs) struggle to surpass large language models (LLMs) without external reward supervision. We propose a novel distillation paradigm: simultaneous distillation of both the teacher's output content and its implicit reward signals. Methodologically, we model the structural alignment between teacher and student responses to self-supervise pseudo-reward generation, enabling joint optimization across the supervised fine-tuning (SFT) and reinforcement learning (RL) stages, marking the first reward-signal distillation framework to eliminate reliance on external evaluators. Experiments demonstrate that the distilled student model significantly outperforms both the teacher model and standard SFT-based distillation baselines on GSM8K and MMLU-Pro, breaking the conventional performance ceiling of knowledge distillation. Key contributions include: (1) an implicit reward distillation framework; (2) empirical validation of the effectiveness and scalability of self-supervised pseudo-rewards; and (3) evidence that SLMs can surpass LLMs via reward alignment.
📝 Abstract
Distilling large language models (LLMs) typically involves transferring the teacher model's responses through supervised fine-tuning (SFT). However, this approach overlooks the opportunity to distill reward signals (quality evaluations) alongside the data (output content). Extracting reliable reward signals directly from teacher models is challenging: LLMs are optimized for generation rather than evaluation, and often produce biased or inconsistent assessments. To address this limitation, we propose a novel distillation pipeline that transfers both responses and rewards. Our method generates pseudo-rewards through a self-supervised mechanism that leverages the inherent structure of teacher and student responses, enabling reward learning without explicit external evaluation. The resulting reward model then guides reinforcement learning (RL), allowing iterative refinement of the student model after an SFT warm-up phase. Experiments on GSM8K and MMLU-Pro demonstrate that our method consistently outperforms traditional SFT-based approaches, enabling student models to surpass the performance of their teachers. This work highlights the potential of structured, self-supervised reward learning for scalable, efficient distillation that reduces dependence on external reward supervision.
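To make the pseudo-reward idea concrete, the following is a minimal toy sketch, not the paper's actual mechanism: it scores a student response by its token-level F1 overlap with the teacher's response, a deliberately crude stand-in for the structural-alignment signal described above. The function name and the overlap metric are illustrative assumptions; the point is only that such a score requires no external evaluator.

```python
def pseudo_reward(teacher_response: str, student_response: str) -> float:
    """Toy structural-alignment score: token-level F1 between the
    teacher's and the student's responses. A hypothetical stand-in
    for the paper's self-supervised pseudo-reward (no external judge)."""
    teacher_tokens = set(teacher_response.split())
    student_tokens = set(student_response.split())
    if not teacher_tokens or not student_tokens:
        return 0.0
    overlap = len(teacher_tokens & student_tokens)
    precision = overlap / len(student_tokens)
    recall = overlap / len(teacher_tokens)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# A response that mirrors the teacher's reasoning structure scores
# higher than one that skips it entirely.
teacher = "first add 3 and 4 then multiply by 2 to get 14"
aligned_student = "add 3 and 4 then multiply by 2 giving 14"
terse_student = "the answer is 7"
assert pseudo_reward(teacher, aligned_student) > pseudo_reward(teacher, terse_student)
```

In the pipeline described above, scores of this kind would supervise a learned reward model, which in turn drives the RL refinement of the student after the SFT warm-up; the real method models alignment structure rather than raw token overlap.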