IPO: Your Language Model is Secretly a Preference Classifier

📅 2025-02-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high computational and annotation costs associated with human preference alignment of large language models (LLMs)—typically incurred by reliance on manual labeling or external reward models—this paper proposes Implicit Preference Optimization (IPO). IPO is the first systematic framework to empirically validate and leverage LLMs’ intrinsic ability to discriminate preferences, enabling end-to-end alignment without external reward models. It operates via multi-response generation, self-supervised preference scoring, and a DPO variant for training. On RewardBench, IPO achieves alignment performance competitive with state-of-the-art reward-model-based methods. The core contribution lies in uncovering and harnessing LLMs’ implicit preference modeling capability, thereby substantially reducing dependence on human feedback and auxiliary models. This establishes a novel, efficient, and low-cost paradigm for preference alignment.

📝 Abstract
Reinforcement learning from human feedback (RLHF) has emerged as the primary method for aligning large language models (LLMs) with human preferences. While it enables LLMs to achieve human-level alignment, it often incurs significant computational and financial costs due to its reliance on training external reward models or human-labeled preferences. In this work, we propose Implicit Preference Optimization (IPO), an alternative approach that leverages generative LLMs as preference classifiers, thereby reducing the dependence on external human feedback or reward models to obtain preferences. We conduct a comprehensive evaluation on the preference classification ability of LLMs using RewardBench, assessing models across different sizes, architectures, and training levels to validate our hypothesis. Furthermore, we investigate the self-improvement capabilities of LLMs by generating multiple responses for a given instruction and employing the model itself as a preference classifier for Direct Preference Optimization (DPO)-based training. Our findings demonstrate that models trained through IPO achieve performance comparable to those utilizing state-of-the-art reward models for obtaining preferences.
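The pipeline described in the abstract — sample several responses per instruction, score them with the model itself, and keep the best and worst as a preference pair for DPO — can be sketched as follows. This is a minimal illustration, not the paper's implementation: `generate` and `preference_score` are hypothetical stand-ins for actual LLM sampling and self-scoring calls.

```python
import random

def generate(model, prompt, n):
    # Stand-in for sampling n candidate responses from the LLM.
    return [f"{prompt}::response_{i}" for i in range(n)]

def preference_score(model, prompt, response):
    # Stand-in for the model acting as its own preference classifier,
    # e.g. the probability it assigns to judging the response as good.
    rng = random.Random(f"{prompt}|{response}")
    return rng.random()

def build_dpo_pairs(model, prompts, n=4):
    """For each prompt: sample n responses, score them with the model
    itself, and keep best/worst as a (chosen, rejected) DPO pair."""
    pairs = []
    for prompt in prompts:
        responses = generate(model, prompt, n)
        ranked = sorted(responses,
                        key=lambda r: preference_score(model, prompt, r))
        pairs.append({"prompt": prompt,
                      "chosen": ranked[-1],    # highest self-score
                      "rejected": ranked[0]})  # lowest self-score
    return pairs
```

The resulting `pairs` list has exactly the `(prompt, chosen, rejected)` shape that standard DPO training loops consume, so no external reward model or human annotation step is needed to produce the training data.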
Problem

Research questions and friction points this paper is trying to address.

Reduce computational costs in alignment
Leverage LLMs as preference classifiers
Validate self-improvement capabilities in LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs as preference classifiers
Reduces external human feedback
Self-improvement via DPO training
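For reference, the DPO objective used in the training step above (Rafailov et al., 2023) is, in standard notation — with the caveat that in IPO the pair (y_w, y_l) comes from the model's own preference scores rather than human labels:

L_DPO(π_θ; π_ref) = −E_{(x, y_w, y_l)} [ log σ( β log(π_θ(y_w|x) / π_ref(y_w|x)) − β log(π_θ(y_l|x) / π_ref(y_l|x)) ) ]

where π_θ is the policy being trained, π_ref a frozen reference model, and β a temperature controlling deviation from the reference.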