AI Summary
To address over-optimization (reward hacking) and poor alignment efficiency caused by uniform, global token-level adjustments in large language model preference optimization, this paper proposes ConfPO: a paradigm that identifies and optimizes only preference-critical tokens using the policy model's own token-level confidence scores. Without auxiliary models or additional computational overhead, ConfPO dynamically reweights the preference loss at the token level via confidence-based assessment, enabling fine-grained critical-token selection and optimization, with no separate credit-assignment model, within a purely supervised framework. Evaluated on AlpacaEval 2 and Arena-Hard, it significantly outperforms DPO and other baselines across multiple LLMs, improving both alignment quality and the efficiency with which the KL-divergence budget is spent. To the authors' knowledge, this is the first method to achieve zero-overhead, high-precision, token-level preference alignment.
Abstract
We introduce ConfPO, a method for preference learning in Large Language Models (LLMs) that identifies and optimizes preference-critical tokens based solely on the training policy's confidence, without requiring any auxiliary models or compute. Unlike prior Direct Alignment Algorithms (DAAs) such as Direct Preference Optimization (DPO), which uniformly adjust all token probabilities regardless of their relevance to preference, ConfPO focuses optimization on the most impactful tokens. This targeted approach improves alignment quality while mitigating overoptimization (i.e., reward hacking) by using the KL divergence budget more efficiently. In contrast to recent token-level methods that rely on credit-assignment models or AI annotators, raising concerns about scalability and reliability, ConfPO is simple, lightweight, and model-free. Experimental results on challenging alignment benchmarks, including AlpacaEval 2 and Arena-Hard, demonstrate that ConfPO consistently outperforms uniform DAAs across various LLMs, delivering better alignment with zero additional computational overhead.
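To make the idea concrete, here is a minimal sketch of a confidence-gated, DPO-style sequence loss. The specific selection rule (treating tokens whose policy confidence falls below a threshold `tau` as preference-critical) and the names `select_critical` and `confpo_style_loss` are illustrative assumptions, not the paper's exact formulation; the paper should be consulted for the actual criterion and loss.

```python
import math

def select_critical(policy_logps, tau=0.5):
    """Mark tokens whose policy confidence exp(logp) is below tau.

    ASSUMPTION for illustration: low-confidence tokens are treated as
    preference-critical; the paper's exact selection rule may differ.
    """
    return [math.exp(lp) < tau for lp in policy_logps]

def confpo_style_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected,
                      beta=0.1, tau=0.5):
    """DPO-style logistic loss restricted to selected tokens.

    Each argument is a list of per-token log-probabilities for one
    chosen/rejected response pair, under the policy (pi_*) or the
    frozen reference model (ref_*).
    """
    mask_c = select_critical(pi_chosen, tau)
    mask_r = select_critical(pi_rejected, tau)
    # Sum log-ratio margins over preference-critical tokens only,
    # instead of over every token as in vanilla DPO.
    margin_c = sum(p - r for p, r, m in zip(pi_chosen, ref_chosen, mask_c) if m)
    margin_r = sum(p - r for p, r, m in zip(pi_rejected, ref_rejected, mask_r) if m)
    logits = beta * (margin_c - margin_r)
    return -math.log(1.0 / (1.0 + math.exp(-logits)))  # -log sigmoid(logits)

# Toy usage: confident tokens (logp near 0) are excluded from the loss.
pi_c, ref_c = [-0.1, -2.0, -1.5], [-0.1, -1.0, -1.0]
pi_r, ref_r = [-0.2, -0.5, -3.0], [-0.2, -0.6, -2.0]
loss = confpo_style_loss(pi_c, pi_r, ref_c, ref_r)
```

Because selection depends only on the training policy's own log-probabilities, which are already computed in the forward pass, the gating adds no extra model calls, which is the sense in which the overhead is zero.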