REWARD CONSISTENCY: Improving Multi-Objective Alignment from a Data-Centric Perspective

📅 2025-04-15
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
In multi-objective preference alignment, objectives such as helpfulness and harmlessness often conflict, making simultaneous optimization difficult. This paper proposes a data-centric mitigation paradigm: it first introduces the concept of *Reward Consistency* (RC) together with an accompanying evaluation and sampling framework, Reward Consistency Sampling (RCS). Through gradient-based analysis, RCS identifies and selects high-consistency samples that constrain the optimization direction at the data level, without modifying the underlying algorithms. The method automatically constructs multi-objective preference data and, when applied to joint harmlessness-helpfulness optimization, achieves an average improvement of 13.37% in both harmless rate and helpfulness win rate; it consistently alleviates trade-offs across diverse multi-objective scenarios. Core contributions: (i) establishing data selection as a novel and effective pathway for mitigating multi-objective conflicts; and (ii) revealing the implicit regularizing effect of consistent samples on gradient updates.

๐Ÿ“ Abstract
Multi-objective preference alignment in language models often encounters a challenging trade-off: optimizing for one human preference (e.g., helpfulness) frequently compromises others (e.g., harmlessness) due to the inherent conflicts between competing objectives. While prior work mainly focuses on algorithmic solutions, we explore a novel data-driven approach to uncover the types of data that can effectively mitigate these conflicts. Specifically, we propose the concept of Reward Consistency (RC), which identifies samples that align with multiple preference objectives, thereby reducing conflicts during training. Through gradient-based analysis, we demonstrate that RC-compliant samples inherently constrain performance degradation during multi-objective optimization. Building on these insights, we further develop Reward Consistency Sampling, a framework that automatically constructs preference datasets that effectively mitigate conflicts during multi-objective alignment. Our generated data achieves an average improvement of 13.37% in both the harmless rate and helpfulness win rate when optimizing harmlessness and helpfulness, and can consistently resolve conflicts in varying multi-objective scenarios.
Problem

Research questions and friction points this paper is trying to address.

Resolve conflicts in multi-objective preference alignment
Identify data samples that align with multiple preference objectives
Automatically construct datasets to mitigate training conflicts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Data-driven approach to mitigate preference conflicts
Reward Consistency identifies multi-objective aligned samples
Automated dataset construction for conflict resolution
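The core selection idea described above can be illustrated as a filter over preference pairs: keep a (prompt, chosen, rejected) triple only when every objective's reward model agrees that the chosen response is better. This is a minimal sketch of that idea; the reward functions, data, and threshold-free agreement rule below are toy assumptions, not the paper's actual implementation.

```python
# Sketch of reward-consistency filtering over preference pairs.
# A sample is "reward-consistent" when all reward models (one per
# objective, e.g. harmlessness and helpfulness) prefer `chosen`
# over `rejected`.

def reward_consistent(sample, reward_models):
    """True if every objective's reward model ranks chosen > rejected."""
    prompt, chosen, rejected = sample
    return all(rm(prompt, chosen) > rm(prompt, rejected) for rm in reward_models)

def rcs_filter(dataset, reward_models):
    """Keep only the preference pairs that all objectives agree on."""
    return [s for s in dataset if reward_consistent(s, reward_models)]

# Hypothetical stand-ins for per-objective reward models (toy scores,
# purely for illustration).
harmless_rm = lambda p, r: -r.count("!")  # fewer exclamations scores higher
helpful_rm = lambda p, r: len(r)          # longer answer scores higher

data = [
    ("How do I fix this?", "Try reinstalling the driver.", "No idea!"),
    ("Explain X", "X!", "X is a concept with several uses."),
]
filtered = rcs_filter(data, [harmless_rm, helpful_rm])
print(filtered)  # only the first pair survives: both toy models agree on it
```

In the second pair the two toy reward models disagree (one prefers the chosen response, the other the rejected one), so the pair is dropped; this is the kind of conflicting sample the summary says RCS excludes from training.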