Entropy Controllable Direct Preference Optimization

📅 2024-11-12
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Direct Preference Optimization (DPO) regularizes its objective with reverse KL divergence, which encourages mode-seeking behavior, but the implicit entropy term in this regularization can cause the policy to miss modes of the reference distribution, degrading performance. Method: The authors propose H-DPO, which extends DPO with a tunable hyperparameter that explicitly controls the entropy of the policy distribution, sharpening the distribution and improving mode capture. The approach modifies only the DPO loss (a binary cross-entropy over preference pairs) and requires no reward model, preserving DPO's training simplicity. Contribution/Results: Analysis shows how entropy control enables more effective mode-seeking fitting, and experiments on mathematical reasoning benchmarks, evaluated with pass@k, demonstrate consistent improvements over standard DPO.

📝 Abstract
In the post-training of large language models (LLMs), Reinforcement Learning from Human Feedback (RLHF) is an effective approach to achieve generation aligned with human preferences. Direct Preference Optimization (DPO) allows for policy training with a simple binary cross-entropy loss without a reward model. The objective of DPO is regularized by reverse KL divergence that encourages mode-seeking fitting to the reference policy. Nonetheless, we indicate that minimizing reverse KL divergence could fail to capture a mode of the reference distribution, which may hurt the policy's performance. Based on this observation, we propose a simple modification to DPO, H-DPO, which allows for control over the entropy of the resulting policy, enhancing the distribution's sharpness and thereby enabling mode-seeking fitting more effectively. In our experiments, we show that H-DPO outperformed DPO across various tasks, demonstrating superior results in pass@$k$ evaluations for mathematical tasks. Moreover, H-DPO is simple to implement, requiring only minor modifications to the loss calculation of DPO, which makes it highly practical and promising for wide-ranging applications in the training of LLMs.
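The abstract states that H-DPO requires only a minor modification to the DPO loss. A minimal per-pair sketch of what that modification might look like is below, under the assumption (not spelled out in this summary) that a coefficient `alpha` scales the policy's log-probabilities inside the implicit reward, with `alpha = 1` recovering standard DPO and `alpha < 1` sharpening the policy distribution:

```python
import math

def hdpo_loss(policy_logp_w, policy_logp_l,
              ref_logp_w, ref_logp_l,
              beta=0.1, alpha=1.0):
    """Per-pair H-DPO loss (illustrative sketch, not the paper's code).

    With alpha = 1 this reduces to the standard DPO binary
    cross-entropy loss; alpha < 1 down-weights the entropy
    term, sharpening the policy distribution.
    """
    # Assumed implicit reward margin:
    #   beta * [(alpha * log pi - log pi_ref)_chosen
    #           - (alpha * log pi - log pi_ref)_rejected]
    margin = beta * ((alpha * policy_logp_w - ref_logp_w)
                     - (alpha * policy_logp_l - ref_logp_l))
    # Binary cross-entropy on the margin: -log sigmoid(margin)
    return math.log1p(math.exp(-margin))
```

The only change relative to DPO is the `alpha` factor on the policy log-probabilities, which is consistent with the abstract's claim that the method is "simple to implement, requiring only minor modifications to the loss calculation of DPO."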
Problem

Research questions and friction points this paper is trying to address.

Improves mode-seeking in DPO for better policy performance
Controls policy entropy to enhance distribution sharpness
Simplifies implementation with minor DPO loss modifications
Innovation

Methods, ideas, or system contributions that make the work stand out.

Entropy controllable DPO modification
Enhanced mode-seeking fitting via entropy
Simple loss calculation modification
Motoki Omura
The University of Tokyo
Yasuhiro Fujita
Preferred Networks, Inc.
reinforcement learning, machine learning
Toshiki Kataoka
Preferred Networks, Inc.