🤖 AI Summary
Existing reward model (RM) evaluation benchmarks exhibit weak correlation with downstream policy optimization performance, failing to reflect true RM capabilities. Method: This paper reconceptualizes RM evaluation from the perspective of reward over-optimization, treating the degree of over-optimization as a diagnostic tool (not an optimization objective) for the first time. It proposes three principled criteria for benchmark construction: (i) controllable response-pair divergence, (ii) multi-source, multi-turn preference comparisons, and (iii) cross-model response sampling. The method integrates RLHF principles with multi-model response sampling, pairwise preference ranking, and quantitative over-optimization analysis. Contribution/Results: Experiments demonstrate that the new benchmark significantly improves the correlation between RM evaluation scores and downstream policy performance. Validation against multiple mainstream RM benchmarks confirms its effectiveness; notably, moderate over-optimization metrics yield better predictive power for downstream task performance than extreme or minimal over-optimization indicators.
📝 Abstract
Reward models (RMs) play a crucial role in reinforcement learning from human feedback (RLHF), aligning model behavior with human preferences. However, existing benchmarks for reward models show a weak correlation with the performance of optimized policies, suggesting that they fail to accurately assess the true capabilities of RMs. To bridge this gap, we explore several evaluation designs through the lens of reward overoptimization, a phenomenon that captures both how well the reward model aligns with human preferences and the dynamics of the learning signal it provides to the policy. The results highlight three key findings on how to construct a reliable benchmark: (i) it is important to minimize differences between chosen and rejected responses beyond correctness, (ii) evaluating reward models requires multiple comparisons across a wide range of chosen and rejected responses, and (iii) given that reward models encounter responses with diverse representations, responses should be sourced from a variety of models. However, we also observe that an extremely high correlation with the degree of overoptimization leads to comparatively lower correlation with certain downstream performance. Thus, when designing a benchmark, it is desirable to use the degree of overoptimization as a useful tool, rather than the end goal.
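The multi-comparison evaluation described in finding (ii) can be sketched as pairwise accuracy: the fraction of (chosen, rejected) pairs the RM ranks correctly. The sketch below is illustrative only; `toy_reward` and the sample pairs are hypothetical stand-ins, not the paper's actual scoring function or benchmark data.

```python
def toy_reward(response: str) -> float:
    # Hypothetical stand-in scorer: longer responses get higher reward.
    # A real RM would score text with a trained preference model.
    return float(len(response))

def pairwise_accuracy(pairs, reward_fn):
    """Fraction of (chosen, rejected) pairs ranked correctly by the RM."""
    correct = sum(
        1 for chosen, rejected in pairs
        if reward_fn(chosen) > reward_fn(rejected)
    )
    return correct / len(pairs)

# Per finding (iii), pairs would be sourced from a variety of models;
# the comments mark hypothetical source models.
pairs = [
    ("a detailed and correct answer", "wrong"),   # sampled from model A
    ("another thorough, grounded reply", "nope"), # sampled from model B
]
print(pairwise_accuracy(pairs, toy_reward))  # → 1.0
```

Aggregating accuracy over many such pairs, drawn from multiple source models and divergence levels, is what lets the benchmark score track the degree of overoptimization rather than a single comparison's outcome.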