🤖 AI Summary
DPO post-training often suffers from implicit reward overfitting and divergence, which can degrade the policy to the point that even preferred responses approach zero probability. This paper traces the problem to the implicit reward over-adapting to the preference data and proposes a reward-model distillation remedy: the language model is trained so that its implicit reward (the scaled log-likelihood ratio against the reference model) matches an explicit reward model fit to the preference data. To account for uncertainty in that reward model, the policy is distilled against a family of reward models that, as a whole, is likely to contain at least one reasonable proxy for the true preference distribution. The approach keeps DPO's simple, supervised training recipe, and experiments show that distilling from such a family mitigates policy degradation and improves robustness to distribution shift in the preference annotations.
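For context, the implicit reward in question is the standard DPO quantity: the DPO objective depends on the policy only through its scaled log-likelihood ratio to the reference model (standard DPO notation; β is the temperature):

$$
\mathcal{L}_{\text{DPO}}(\pi_\theta) \;=\; -\,\mathbb{E}_{(x,\, y_w,\, y_l)}\!\left[\log \sigma\!\big(\hat r_\theta(x, y_w) - \hat r_\theta(x, y_l)\big)\right],
\qquad
\hat r_\theta(x, y) \;=\; \beta \log \frac{\pi_\theta(y \mid x)}{\pi_{\text{ref}}(y \mid x)}.
$$

Because the loss keeps decreasing as the margin $\hat r_\theta(x, y_w) - \hat r_\theta(x, y_l)$ grows, finite preference data can push the implicit rewards toward infinite magnitude, which is the divergence this work targets.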
📝 Abstract
Language model (LM) post-training (or alignment) involves maximizing a reward function that is derived from preference annotations. Direct Preference Optimization (DPO) is a popular offline alignment method that trains a policy directly on preference data without the need to train a reward model or apply reinforcement learning. However, empirical evidence suggests that DPO typically assigns implicit rewards that overfit and trend towards infinite magnitude. This frequently leads to degenerate policies, sometimes causing even the probabilities of the preferred generations to go to zero. In this work, we analyze this phenomenon and use distillation to obtain a better proxy for the true preference distribution over generation pairs: we train the LM such that its induced implicit reward, i.e., the scaled log-likelihood ratio of the model to the reference model, matches an explicit reward model trained on the preference data. Moreover, to account for uncertainty in the reward model we are distilling from, we optimize against a family of reward models that, as a whole, is likely to include at least one reasonable proxy for the preference distribution. Our results show that distilling from such a family of reward models leads to improved robustness to distribution shift in preference annotations, while preserving the simple supervised nature of DPO.
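As a rough illustration of the distillation idea, the sketch below matches the policy's implicit reward margin on a preference pair to the reward margins of a family of explicit reward models. The squared-error objective, the worst-case aggregation over the family, and all function names are assumptions of this sketch, not the paper's exact formulation.

```python
# Hypothetical sketch of reward-model distillation for a DPO-style policy.
# Names and the exact loss/aggregation are illustrative, not the paper's implementation.
import torch
import torch.nn.functional as F


def implicit_reward_margin(policy_logps_w, policy_logps_l,
                           ref_logps_w, ref_logps_l, beta):
    """Implicit reward difference between preferred (w) and dispreferred (l) responses:
    beta * [log pi/pi_ref (y_w|x) - log pi/pi_ref (y_l|x)].
    Log-probabilities are sequence-level (summed over tokens)."""
    return beta * ((policy_logps_w - ref_logps_w) - (policy_logps_l - ref_logps_l))


def distillation_loss(policy_logps_w, policy_logps_l,
                      ref_logps_w, ref_logps_l,
                      rm_rewards_w, rm_rewards_l, beta=0.1):
    """Match the policy's implicit reward margin to the margins of a family of
    explicit reward models (rm_rewards_*: one row per reward model), and take the
    worst case over the family as one possible pessimistic aggregation (an
    assumption of this sketch)."""
    implicit = implicit_reward_margin(
        policy_logps_w, policy_logps_l, ref_logps_w, ref_logps_l, beta)   # [batch]
    explicit = rm_rewards_w - rm_rewards_l                                # [num_rms, batch]
    per_rm = F.mse_loss(
        implicit.unsqueeze(0).expand_as(explicit),
        explicit, reduction="none").mean(dim=1)                           # [num_rms]
    return per_rm.max()  # optimize the policy against the hardest reward model


# Toy usage: batch of 4 preference pairs, family of 3 reward models.
policy_w, policy_l = torch.randn(4), torch.randn(4)
ref_w, ref_l = torch.randn(4), torch.randn(4)
rm_w, rm_l = torch.randn(3, 4), torch.randn(3, 4)
loss = distillation_loss(policy_w, policy_l, ref_w, ref_l, rm_w, rm_l)
```

The worst-case aggregation is meant to reflect the abstract's rationale: if the family as a whole is likely to contain at least one reasonable proxy for the preference distribution, then matching every member reasonably well in particular matches that proxy; other aggregations (e.g., averaging over the family) are equally plausible readings.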