🤖 AI Summary
Self-rewarding large language models (LLMs) suffer from preference data bias and performance degradation in iterative alignment due to reward inconsistency across rounds. This work presents the first formal analysis of iterative self-rewarding frameworks and introduces cross-iteration reward consistency regularization—a KL-divergence-based constraint that mitigates overconfident reward labeling and cumulative bias. Our method integrates LLM-as-a-judge scoring, direct preference optimization (DPO), and the proposed consistency regularization. Evaluated on multiple benchmarks, it significantly improves reward model reliability (Kendall’s τ +12.3%) and alignment quality (win rate +8.7%), effectively resolving iterative degradation in 7B-scale models. The core innovation lies in formulating reward consistency as an optimizable regularizer, enabling more robust self-supervised alignment without external human feedback.
📝 Abstract
Recent self-rewarding large language models (LLM) have successfully applied LLM-as-a-Judge to iteratively improve the alignment performance without the need of human annotations for preference data. These methods commonly utilize the same LLM to act as both the policy model (which generates responses) and the reward model (which scores and ranks those responses). The ranked responses are then used as preference pairs to train the LLM via direct alignment technologies (e.g. DPO). However, it is noteworthy that throughout this process, there is no guarantee of accuracy in the rewarding and ranking, which is critical for ensuring accurate rewards and high-quality preference data. Empirical results from relatively small LLMs (e.g., 7B parameters) also indicate that improvements from self-rewarding may diminish after several iterations in certain situations, which we hypothesize is due to accumulated bias in the reward system. This bias can lead to unreliable preference data for training the LLM. To address this issue, we first formulate and analyze the generalized iterative preference fine-tuning framework for self-rewarding language model. We then introduce the regularization to this generalized framework to mitigate the overconfident preference labeling in the self-rewarding process. Based on this theoretical insight, we propose a Consistency Regularized sElf-rewarding lAnguage Model (CREAM) that leverages the consistency of rewards across different iterations to regularize the self-rewarding training, helping the model to learn from more reliable preference data. With this explicit regularization, our empirical results demonstrate the superiority of CREAM in improving both reward consistency and alignment performance. The code is publicly available at https://github.com/Raibows/CREAM.