🤖 AI Summary
While Direct Preference Optimization (DPO) offers training stability, it is prone to overfitting and model collapse. To address these issues, we propose Linear Preference Optimization (LPO), a novel preference alignment method that replaces DPO's log-sigmoid loss with an absolute difference loss, thereby decoupling the gradient updates for the preferred and dispreferred responses within each preference pair. LPO further introduces an offset constraint and a quality-preserving regularization term, enabling a linearly controllable reduction of the rejection probability. This design mitigates gradient conflict and optimization imbalance, markedly improving training stability and robustness. Empirically, LPO consistently outperforms DPO across diverse tasks, including general text generation, mathematical reasoning, and speech synthesis, demonstrating strong generalization. All code, models, and datasets are publicly released.
📝 Abstract
DPO (Direct Preference Optimization) has become a widely used offline preference optimization algorithm due to its simplicity and training stability. However, DPO is prone to overfitting and collapse. To address these challenges, we propose Linear Preference Optimization (LPO), a novel alignment framework featuring three key innovations. First, we introduce gradient decoupling by replacing the log-sigmoid function with an absolute difference loss, thereby isolating the optimization dynamics of the chosen and rejected responses. Second, we improve stability through an offset constraint combined with a positive regularization term that preserves the quality of the chosen response. Third, we implement controllable rejection suppression via gradient separation with a straightforward estimation scheme and a tunable coefficient that linearly regulates the descent of the rejection probability. Extensive experiments demonstrate that LPO consistently improves performance across a range of tasks, including general text generation, mathematical reasoning, and text-to-speech (TTS). These results establish LPO as a robust and tunable paradigm for preference alignment, and we publicly release the source code, models, and training data.
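The abstract names three ingredients: an absolute difference loss in place of DPO's log-sigmoid, an offset constraint plus a quality-preserving regularizer, and a tunable coefficient that linearly controls how fast the rejection probability falls. A minimal sketch of how such a per-pair loss could be composed is below. The function name and the hyperparameters `beta`, `delta`, `lam`, and `alpha` are hypothetical placeholders, not the paper's notation; the exact LPO formulation is defined in the paper itself.

```python
def lpo_loss_sketch(
    logp_chosen: float,        # policy log-prob of the preferred response
    logp_rejected: float,      # policy log-prob of the dispreferred response
    ref_logp_chosen: float,    # reference-model log-prob of the preferred response
    ref_logp_rejected: float,  # reference-model log-prob of the dispreferred response
    beta: float = 0.1,         # reward scaling, as in DPO
    delta: float = 1.0,        # hypothetical offset constraint on the margin
    lam: float = 1.0,          # hypothetical coefficient scaling the rejection term
    alpha: float = 0.01,       # hypothetical weight of the quality-preserving regularizer
) -> float:
    """Per-pair loss sketch combining the three ingredients named in the abstract."""
    # Implicit rewards relative to the reference model, as in DPO.
    r_chosen = beta * (logp_chosen - ref_logp_chosen)
    r_rejected = beta * (logp_rejected - ref_logp_rejected)
    # Absolute-difference margin loss with an offset, replacing -log(sigmoid(margin)):
    # its gradient magnitude is constant (the sign function), so the chosen and
    # rejected terms no longer saturate together, and `lam` linearly scales how
    # strongly the rejected log-prob is pushed down.
    margin_loss = abs(r_chosen - lam * r_rejected - delta)
    # Positive regularizer discouraging the chosen log-prob from collapsing.
    quality_reg = -alpha * logp_chosen
    return margin_loss + quality_reg
```

In a training loop, this scalar would be averaged over a batch of preference pairs; the key difference from DPO is that the absolute-difference term yields piecewise-constant gradients instead of the sigmoid's saturating ones.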