Rethinking DPO: The Role of Rejected Responses in Preference Misalignment

📅 2025-06-15
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
DPO's core limitation lies in the excessive gradient dominance of the rejected response within its loss function, which leaves the chosen response's probability insufficiently improved and preference alignment imbalanced. To address this, we propose Bounded-DPO (BDPO), the first method that introduces a bounded constraint mechanism to explicitly suppress the gradient dominance of the rejected response without altering DPO's original architecture, enabling cooperative optimization of the chosen and rejected responses. We provide theoretical convergence guarantees for BDPO. Empirically, BDPO consistently improves the generation probability of preferred responses by +2.1–4.7% across multiple benchmarks while simultaneously suppressing rejected responses, and it delivers robust gains over state-of-the-art methods including DPO, IPO, and KTO.
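To make the summary concrete, here is a minimal sketch contrasting the standard DPO loss with one plausible bounded variant. The exact BDPO formulation is not given in this summary; the `bound` parameter and the clamping of the rejected log-ratio below are illustrative assumptions, chosen only to show how a bound can stop the rejected response from dominating the gradient.

```python
import math


def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))


def dpo_loss(logp_w, logp_l, ref_w, ref_l, beta=0.1):
    """Standard DPO loss: -log sigmoid(beta * (chosen ratio - rejected ratio)).

    logp_* are policy log-probs, ref_* are reference-model log-probs,
    for the chosen (w) and rejected (l) responses.
    """
    margin = beta * ((logp_w - ref_w) - (logp_l - ref_l))
    return -math.log(sigmoid(margin))


def bounded_dpo_loss(logp_w, logp_l, ref_w, ref_l, beta=0.1, bound=2.0):
    """Illustrative bounded variant (NOT the paper's exact formulation).

    Clamping the rejected log-ratio from below means that once the rejected
    response is sufficiently suppressed, further decreasing its probability
    no longer changes the loss, so its gradient contribution is bounded and
    optimization pressure shifts toward raising the chosen response.
    """
    rejected_ratio = max(logp_l - ref_l, -bound)
    margin = beta * ((logp_w - ref_w) - rejected_ratio)
    return -math.log(sigmoid(margin))
```

While the rejected log-ratio stays above the bound the two losses coincide; once it falls below, the bounded loss freezes the rejected term, which mirrors the cooperative-optimization behavior the summary attributes to BDPO.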

📝 Abstract
Direct Preference Optimization (DPO) is a simple and efficient framework that has attracted substantial attention. However, it often struggles to meet its primary objectives -- increasing the generation probability of chosen responses while reducing that of rejected responses -- due to the dominant influence of rejected responses on the loss function. This imbalance leads to suboptimal performance in promoting preferred responses. In this work, we systematically analyze the limitations of DPO and existing algorithms designed to achieve the objectives stated above. To address these limitations, we propose Bounded-DPO (BDPO), a novel method that bounds the influence of rejected responses while maintaining the original optimization structure of DPO. Through theoretical analysis and empirical evaluations, we demonstrate that BDPO achieves a balanced optimization of the chosen and rejected responses, outperforming existing algorithms.
Problem

Research questions and friction points this paper is trying to address.

Addresses the imbalance in DPO's preference optimization
Reduces the rejected responses' dominance of the loss function
Proposes BDPO to balance the chosen and rejected responses
Innovation

Methods, ideas, or system contributions that make the work stand out.

BDPO bounds the influence of rejected responses
Maintains DPO's original optimization structure
Balances the optimization of chosen and rejected responses