🤖 AI Summary
In multi-objective preference alignment, objectives such as helpfulness and harmlessness often exhibit intrinsic conflicts, making simultaneous optimization challenging. This paper proposes a data-driven mitigation paradigm: it first introduces the concept of *Reward Consistency* (RC) and then builds an accompanying selection framework, Reward Consistency Sampling (RCS). Through gradient-based analysis, RCS identifies and selects high-consistency samples that constrain the optimization direction at the data level, without modifying the underlying algorithms. The method automatically constructs multi-objective preference data and, when jointly optimizing harmlessness and helpfulness, achieves an average improvement of 13.37% in both the harmless rate and the helpfulness win rate, consistently alleviating trade-offs across diverse multi-objective scenarios. The core contributions are: (i) establishing data selection as a novel and effective pathway for mitigating multi-objective conflicts; and (ii) revealing the implicit regularizing effect of reward-consistent samples on gradient updates.
📄 Abstract
Multi-objective preference alignment in language models often encounters a challenging trade-off: optimizing for one human preference (e.g., helpfulness) frequently compromises others (e.g., harmlessness) due to the inherent conflicts between competing objectives. While prior work mainly focuses on algorithmic solutions, we explore a novel data-driven approach to uncover the types of data that can effectively mitigate these conflicts. Specifically, we propose the concept of Reward Consistency (RC), which identifies samples that align with multiple preference objectives, thereby reducing conflicts during training. Through gradient-based analysis, we demonstrate that RC-compliant samples inherently constrain performance degradation during multi-objective optimization. Building on these insights, we further develop Reward Consistency Sampling (RCS), a framework that automatically constructs preference datasets that effectively mitigate conflicts during multi-objective alignment. Our generated data achieves an average improvement of 13.37% in both the harmless rate and the helpfulness win rate when jointly optimizing harmlessness and helpfulness, and consistently resolves conflicts across varying multi-objective scenarios.
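The Reward Consistency idea described above can be pictured as a filter over preference pairs scored by several per-objective reward models: a pair is kept only when every objective agrees on which response is better. This is a minimal, hypothetical sketch; the function names and the simple "chosen scores at least as high under every objective" criterion are illustrative assumptions, not the paper's exact formulation or its gradient-based analysis.

```python
from typing import Dict, List

def is_reward_consistent(rewards_chosen: Dict[str, float],
                         rewards_rejected: Dict[str, float]) -> bool:
    """Illustrative RC check: the chosen response must score at least as
    high as the rejected one under every objective's reward model."""
    return all(rewards_chosen[obj] >= rewards_rejected[obj]
               for obj in rewards_chosen)

def rc_sample(pairs: List[dict]) -> List[dict]:
    """Keep only preference pairs that are consistent across all objectives."""
    return [p for p in pairs
            if is_reward_consistent(p["chosen"], p["rejected"])]

pairs = [
    # Consistent pair: chosen wins on both helpfulness and harmlessness.
    {"chosen":   {"helpful": 0.9, "harmless": 0.8},
     "rejected": {"helpful": 0.4, "harmless": 0.3}},
    # Conflicting pair: chosen is more helpful but less harmless.
    {"chosen":   {"helpful": 0.9, "harmless": 0.2},
     "rejected": {"helpful": 0.4, "harmless": 0.7}},
]

filtered = rc_sample(pairs)
# filtered keeps only the first, conflict-free pair
```

Training on the filtered pairs is what, per the summary, implicitly regularizes gradient updates: no retained sample pushes one objective's reward up while pushing another's down.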