ConfPO: Exploiting Policy Model Confidence for Critical Token Selection in Large Language Model Preference Optimization

📅 2025-06-10
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
To address over-optimization (reward hacking) and the inefficiency of uniform, global token-level adjustments in large language model preference optimization, this paper proposes ConfPO, a paradigm that identifies and optimizes only preference-critical tokens using the policy model's intrinsic token-level confidence scores. Without auxiliary models or additional computational overhead, ConfPO restricts the preference loss to tokens on which the policy itself is uncertain, enabling fine-grained critical-token selection and optimization without a separate credit-assignment model. Evaluated on AlpacaEval 2 and Arena-Hard, it significantly outperforms DPO and other baselines across multiple LLMs, improving both alignment quality and the efficiency with which the KL-divergence budget is spent. To the authors' knowledge, it is the first method to achieve zero-overhead, token-level preference alignment.
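
As a rough sketch of the underlying idea (the notation below is illustrative, not taken from the paper): standard DPO sums the policy-to-reference log-ratio over every token of each response, while a confidence-selected variant restricts the sums to a critical-token set chosen where the policy's own confidence is low.

```latex
% Standard DPO objective: the sums run over all tokens t of y_w and y_l.
\mathcal{L}_{\mathrm{DPO}}
  = -\,\mathbb{E}_{(x,\,y_w,\,y_l)}\!\left[\log\sigma\!\left(
      \beta \sum_{t} \log\frac{\pi_\theta(y_{w,t}\mid x, y_{w,<t})}{\pi_{\mathrm{ref}}(y_{w,t}\mid x, y_{w,<t})}
    - \beta \sum_{t} \log\frac{\pi_\theta(y_{l,t}\mid x, y_{l,<t})}{\pi_{\mathrm{ref}}(y_{l,t}\mid x, y_{l,<t})}
    \right)\right]

% Confidence-selected variant (illustrative): restrict each sum to
%   \mathcal{C}(y) = \{\, t : \pi_\theta(y_t \mid x, y_{<t}) < \tau \,\},
% i.e. only tokens where the policy's own confidence falls below a
% threshold \tau are optimized; all other tokens are left untouched.
```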

πŸ“ Abstract
We introduce ConfPO, a method for preference learning in Large Language Models (LLMs) that identifies and optimizes preference-critical tokens based solely on the training policy's confidence, without requiring any auxiliary models or compute. Unlike prior Direct Alignment Algorithms (DAAs) such as Direct Preference Optimization (DPO), which uniformly adjust all token probabilities regardless of their relevance to preference, ConfPO focuses optimization on the most impactful tokens. This targeted approach improves alignment quality while mitigating overoptimization (i.e., reward hacking) by using the KL divergence budget more efficiently. In contrast to recent token-level methods that rely on credit-assignment models or AI annotators, raising concerns about scalability and reliability, ConfPO is simple, lightweight, and model-free. Experimental results on challenging alignment benchmarks, including AlpacaEval 2 and Arena-Hard, demonstrate that ConfPO consistently outperforms uniform DAAs across various LLMs, delivering better alignment with zero additional computational overhead.
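
To make the mechanism concrete, here is a minimal PyTorch sketch of confidence-based critical-token selection applied to a DPO-style loss. The helper names, the fixed-threshold selection rule, and the masking scheme are assumptions for illustration; the paper's actual selection criterion may differ.

```python
# Minimal sketch: confidence-selected DPO-style loss. Helper names
# (select_critical_tokens, confpo_style_loss) and the threshold rule are
# hypothetical, not the paper's specification.
import torch
import torch.nn.functional as F

def select_critical_tokens(policy_logprobs: torch.Tensor,
                           conf_threshold: float = 0.5) -> torch.Tensor:
    """Mark tokens where the policy's confidence (the probability it assigns
    to the realized token) is low; these are treated as preference-critical."""
    token_conf = policy_logprobs.exp()            # (batch, seq) probabilities
    return (token_conf < conf_threshold).float()  # 1.0 = optimize this token

def confpo_style_loss(policy_chosen, policy_rejected,
                      ref_chosen, ref_rejected, beta: float = 0.1):
    """DPO-style preference loss restricted to confidence-selected tokens.
    All inputs are per-token log-probs of shape (batch, seq)."""
    mask_w = select_critical_tokens(policy_chosen)
    mask_l = select_critical_tokens(policy_rejected)
    # Policy-to-reference log-ratios, summed over selected tokens only,
    # so high-confidence tokens contribute no gradient.
    ratio_w = ((policy_chosen - ref_chosen) * mask_w).sum(dim=-1)
    ratio_l = ((policy_rejected - ref_rejected) * mask_l).sum(dim=-1)
    return -F.logsigmoid(beta * (ratio_w - ratio_l)).mean()

# Toy usage with random per-token log-probs.
batch, seq = 2, 8
rand_logprobs = lambda: torch.log(torch.rand(batch, seq).clamp_min(1e-6))
loss = confpo_style_loss(rand_logprobs(), rand_logprobs(),
                         rand_logprobs(), rand_logprobs())
print(loss.item())
```

Because selection reads off probabilities the policy already computes during its forward pass, the masking requires no extra model calls, which is where the zero-overhead claim comes from.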
Problem

Research questions and friction points this paper is trying to address.

Direct alignment algorithms such as DPO uniformly adjust all token probabilities, regardless of each token's relevance to the stated preference
Inefficient use of the KL-divergence budget leads to over-optimization (reward hacking)
Existing token-level methods depend on credit-assignment models or AI annotators, raising scalability and reliability concerns
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses the policy's own token-level confidence to select preference-critical tokens
Focuses optimization on the most impactful tokens to improve alignment
Requires no auxiliary models and adds no computational overhead
👥 Authors
Hee Suk Yoon
PhD candidate @ KAIST
Deep Learning, Natural Language Processing, Audio, Vision-Language, Uncertainty
Eunseop Yoon
KAIST
Deep learning
M. Hasegawa-Johnson
University of Illinois Urbana-Champaign (UIUC), USA
Sungwoong Kim
Associate Professor, Korea University
Artificial general intelligence
C. Yoo
Korea Advanced Institute of Science and Technology (KAIST), Republic of Korea