Disentangling Length Bias In Preference Learning Via Response-Conditioned Modeling

📅 2025-02-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In RLHF, reward models suffer from response-length bias, leading to distorted preference modeling and failure in following length instructions. To address this, we propose a response-conditional modeling paradigm. Our key contributions are: (i) the Response-conditioned Bradley-Terry (Rc-BT) model, the first to explicitly decouple semantic preference from length requirements; and (ii) Rc-DPO, a novel variant of DPO that incorporates length information into the objective function, enabling end-to-end suppression of length bias while jointly optimizing for instruction adherence. Evaluated on a manually augmented, length-controllable preference dataset across multiple base LLMs (e.g., Llama, Qwen) and standard preference benchmarks, Rc-DPO achieves significant improvements—+4.2%–7.8% in preference accuracy and +12.3%–19.6% in length-instruction compliance—without requiring additional length annotations, demonstrating strong generalization.
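As a rough formal sketch of the idea described above (our notation, not necessarily the paper's): the standard Bradley-Terry model scores a preference pair through a reward margin, and a response-conditioned variant additionally conditions the reward on an explicit length requirement $c$, so the model must judge semantics and length adherence jointly. The concatenation $[x; c]$ is an assumption about how the conditioning is implemented.

```latex
% Standard Bradley-Terry preference probability
P(y_w \succ y_l \mid x) = \sigma\!\bigl(r_\theta(x, y_w) - r_\theta(x, y_l)\bigr)

% Response-conditioned sketch: the prompt x is augmented with a
% length requirement c before the reward model scores each response
P(y_w \succ y_l \mid x, c) = \sigma\!\bigl(r_\theta([x; c], y_w) - r_\theta([x; c], y_l)\bigr)
```

Here $\sigma$ is the logistic function and $y_w$, $y_l$ are the preferred and dispreferred responses.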

📝 Abstract
Reinforcement Learning from Human Feedback (RLHF) has achieved considerable success in aligning large language models (LLMs) by modeling human preferences with a learnable reward model and employing a reinforcement learning algorithm to maximize the reward model's scores. However, these reward models are susceptible to exploitation through various superficial confounding factors, with length bias emerging as a particularly significant concern. Moreover, while the pronounced impact of length bias on preference modeling suggests that LLMs possess an inherent sensitivity to length perception, our preliminary investigations reveal that fine-tuned LLMs consistently struggle to adhere to explicit length instructions. To address these two limitations, we propose a novel framework wherein the reward model explicitly differentiates between human semantic preferences and response length requirements. Specifically, we introduce a Response-conditioned Bradley-Terry (Rc-BT) model that enhances the reward model's capability in mitigating length bias and following length instructions, through training on our augmented dataset. Furthermore, we propose the Rc-DPO algorithm, which leverages the Rc-BT formulation for Direct Preference Optimization (DPO) of LLMs, simultaneously mitigating length bias and promoting adherence to length instructions. Extensive evaluations demonstrate that our approach substantially improves both preference modeling and length instruction compliance, with its effectiveness validated across various foundational models and preference datasets.
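To make the optimization step concrete, below is a minimal sketch of a DPO-style pairwise loss on sequence log-probabilities. The response-conditioned part (augmenting the prompt with a length instruction before computing log-probs under the policy and reference models) is assumed to happen upstream; the function name `rc_dpo_loss` and its signature are illustrative, not the paper's API.

```python
import math

def rc_dpo_loss(logp_chosen, logp_rejected,
                ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO-style loss for one preference pair.

    Inputs are summed token log-probabilities of the chosen and
    rejected responses under the policy and the frozen reference
    model. In an Rc-DPO setting, the prompt would already carry an
    explicit length instruction (assumption, handled upstream).
    """
    # Implicit rewards: beta-scaled log-ratios against the reference policy
    chosen_reward = beta * (logp_chosen - ref_logp_chosen)
    rejected_reward = beta * (logp_rejected - ref_logp_rejected)
    margin = chosen_reward - rejected_reward
    # -log(sigmoid(margin)), written in a numerically stable form
    if margin >= 0:
        return math.log1p(math.exp(-margin))
    return -margin + math.log1p(math.exp(margin))
```

When the policy assigns the chosen response a larger log-ratio advantage than the rejected one, the margin grows and the loss shrinks; a zero margin gives the chance-level loss log 2.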
Problem

Research questions and friction points this paper is trying to address.

Reinforcement Learning with Human Feedback
Large Language Models
Length Bias
Innovation

Methods, ideas, or system contributions that make the work stand out.

Rc-BT Model
Rc-DPO Algorithm
Preference Modeling