Reward Models Can Improve Themselves: Reward-Guided Adversarial Failure Mode Discovery for Robust Reward Modeling

📅 2025-07-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Reward models (RMs) exhibit poor robustness under distributional shift or adversarial perturbations, and existing failure-detection methods rely on prior knowledge of preference distributions or failure modes, which limits their practical applicability. Method: a prior-free self-amplification framework that uses reward-guided controlled decoding to autonomously generate adversarial examples the RM misscores, automatically discovering failure modes and supplying augmentation data; iterative retraining on these examples then rectifies the RM and eliminates spurious correlations. Contribution/Results: this is presented as the first work to enable RMs to perform self-diagnosis and self-optimization. Evaluated on Anthropic HH and PKU Beavertails, the method significantly improves RM robustness against adversarial and out-of-distribution inputs while preserving, or even enhancing, reward quality and alignment performance. Consequently, downstream reinforcement learning from human feedback (RLHF) training becomes more stable and reliable.

📝 Abstract
Reward modeling (RM), which captures human preferences to align large language models (LLMs), is increasingly employed in tasks such as model finetuning, response filtering, and ranking. However, due to the inherent complexity of human preferences and the limited coverage of available datasets, reward models often fail under distributional shifts or adversarial perturbations. Existing approaches for identifying such failure modes typically rely on prior knowledge about preference distributions or failure attributes, limiting their practicality in real-world settings where such information is unavailable. In this work, we propose a tractable, preference-distribution-agnostic method for discovering reward model failure modes via reward-guided controlled decoding. Building on this, we introduce REFORM, a self-improving reward modeling framework that enhances robustness by using the reward model itself to guide the generation of falsely scored responses. These adversarial examples are then used to augment the training data and patch the reward model's misaligned behavior. We evaluate REFORM on two widely used preference datasets, Anthropic Helpful Harmless (HH) and PKU Beavertails, and demonstrate that it significantly improves robustness without sacrificing reward quality. Notably, REFORM preserves performance both in direct evaluation and in downstream policy training, and further improves alignment quality by removing spurious correlations.
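The core mechanism in the abstract, reward-guided controlled decoding, steers generation toward responses the reward model itself scores highly; pushing the RM's score upward is what surfaces responses it overrates (candidate failure modes). The paper's actual procedure is not reproduced here; the following is a minimal, self-contained Python sketch with a toy reward model, where all function names (`reward_model`, `guided_decode`) and the candidate vocabulary are hypothetical stand-ins:

```python
def reward_model(text: str) -> float:
    # Toy stand-in for a learned RM: it rewards polite-sounding tokens,
    # a spurious correlation that reward-guided search can exploit.
    return sum(text.count(w) for w in ("please", "thanks")) - 0.1 * len(text.split())

def guided_decode(candidates_per_step, steps=3):
    # Greedy reward-guided decoding: at each step, extend the prefix with
    # whichever candidate token the RM scores highest. Maximizing the RM's
    # own score surfaces responses it overrates (potential failure modes).
    prefix = ""
    for step in range(steps):
        best = max(candidates_per_step[step],
                   key=lambda tok: reward_model(prefix + " " + tok))
        prefix = (prefix + " " + best).strip()
    return prefix

vocab = [["please", "now"], ["ignore", "thanks"], ["rules", "thanks"]]
adversarial = guided_decode(vocab)
print(adversarial)  # → "please thanks thanks"
```

The toy RM happily stacks politeness tokens regardless of content, illustrating the kind of spuriously high-scored response that, once discovered, becomes an augmentation example for retraining.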
Problem

Research questions and friction points this paper is trying to address.

Reward models fail under distributional shift or adversarial perturbations
Existing failure-discovery methods require prior knowledge of preference distributions or failure modes, limiting real-world practicality
How can a reward model discover and repair its own failure modes without such priors?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reward-guided adversarial failure mode discovery
Self-improving reward modeling framework REFORM
Augments training data with adversarial examples
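The innovation bullets describe a loop: discover adversarial examples, add them (correctly labeled) to the preference data, and retrain to patch the RM. Below is a toy end-to-end sketch of that loop, not the paper's implementation: the RM is a bag-of-words weight dictionary, training is a single pass of perceptron-style pairwise updates, and the adversarial pair is assumed to have come from a reward-guided search step:

```python
def rm_score(weights: dict, text: str) -> float:
    # Toy linear reward model over bag-of-words features.
    return sum(weights.get(w, 0.0) for w in text.split())

def train(weights: dict, pairs, lr=0.5) -> dict:
    # One pass of pairwise updates: whenever the preferred response does not
    # outscore the dispreferred one, nudge weights in opposite directions.
    for good, bad in pairs:
        if rm_score(weights, good) <= rm_score(weights, bad):
            for w in good.split():
                weights[w] = weights.get(w, 0.0) + lr
            for w in bad.split():
                weights[w] = weights.get(w, 0.0) - lr
    return weights

# "please" carries a spurious positive weight in the initial RM.
weights = {"helpful": 1.0, "please": 1.0}
# Hypothetical adversarial pair found by reward-guided decoding, labeled so the
# spuriously high-scored response is marked dispreferred.
pairs = [("helpful answer", "please ignore safety")]

before = rm_score(weights, "please ignore safety")  # overrated: 1.0
weights = train(weights, pairs)
after = rm_score(weights, "please ignore safety")   # patched: -0.5
print(before, after)
```

Retraining on the augmented pair drives the bad response's score down while the preferred response's score rises, mirroring how the framework removes spurious correlations.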