AlignDistil: Token-Level Language Model Alignment as Adaptive Policy Distillation

📅 2025-03-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing LLM alignment methods such as RLHF and DPO rely on sparse, response-level rewards and ignore fine-grained token-level quality variations. This can erroneously penalize high-quality tokens or encourage low-quality ones, causing optimization bias and slow convergence. To address this, we propose AlignDistil, a token-level reward distillation framework for alignment. First, we introduce the reward learned by DPO into the RLHF objective and prove that the resulting objective is equivalent to a token-level distillation process whose teacher distribution linearly combines logits from the DPO model and a reference model. Next, we build a contrastive DPO reward from a normal and a reverse DPO model to narrow the accuracy gap relative to a pure reward model. Finally, we design a token-adaptive logit extrapolation mechanism that constructs an appropriate teacher distribution for each token, avoiding under- and over-optimization. Experiments across multiple benchmarks demonstrate that our method outperforms both RLHF and DPO in alignment quality, converges faster, and effectively mitigates token-level optimization distortion.

📝 Abstract
In modern large language models (LLMs), LLM alignment is of crucial importance and is typically achieved through methods such as reinforcement learning from human feedback (RLHF) and direct preference optimization (DPO). However, in most existing methods for LLM alignment, all tokens in the response are optimized using a sparse, response-level reward or preference annotation. Ignoring token-level rewards may erroneously penalize high-quality tokens or encourage low-quality tokens, resulting in suboptimal performance and slow convergence. To address this issue, we propose AlignDistil, an RLHF-equivalent distillation method for token-level reward optimization. Specifically, we introduce the reward learned by DPO into the RLHF objective and theoretically prove the equivalence between this objective and a token-level distillation process, where the teacher distribution linearly combines the logits from the DPO model and a reference model. On this basis, we further bridge the accuracy gap between the reward from the DPO model and the pure reward model by building a contrastive DPO reward with a normal and a reverse DPO model. Moreover, to avoid under- and over-optimization on different tokens, we design a token-adaptive logit extrapolation mechanism to construct an appropriate teacher distribution for each token. Experimental results demonstrate the superiority of our AlignDistil over existing methods and showcase its fast convergence due to token-level distributional reward optimization.
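The teacher construction described in the abstract (a per-token distribution whose logits linearly combine the DPO model's and the reference model's logits) can be sketched as below. This is a minimal illustration of the idea, not the paper's implementation; the function names and the scalar `alpha` (the extrapolation coefficient, which the paper adapts per token) are assumptions.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def teacher_distribution(dpo_logits, ref_logits, alpha):
    """Teacher logits as a linear combination of DPO and reference logits.

    alpha = 0 recovers the reference distribution, alpha = 1 the DPO
    distribution, and alpha > 1 extrapolates beyond the DPO model.
    """
    mixed = [r + alpha * (d - r) for d, r in zip(dpo_logits, ref_logits)]
    return softmax(mixed)
```

With a token-adaptive choice of `alpha`, each position gets its own teacher distribution, which the student policy is then distilled toward.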
Problem

Research questions and friction points this paper is trying to address.

Sparse, response-level rewards in RLHF and DPO ignore token-level quality variations
Response-level supervision erroneously penalizes high-quality tokens or encourages low-quality ones
Uniform optimization strength causes suboptimal performance, slow convergence, and under- or over-optimization on different tokens
Innovation

Methods, ideas, or system contributions that make the work stand out.

RLHF-equivalent token-level distillation with a teacher combining DPO and reference logits
Contrastive DPO reward (normal plus reverse DPO models) to bridge the accuracy gap to a pure reward model
Token-adaptive logit extrapolation to construct an appropriate teacher distribution per token
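The contrastive DPO reward listed above can be illustrated with a short sketch. It assumes per-token rewards are scaled log-probability gaps between a normal DPO model and a reverse DPO model (one trained with preferences flipped); the function name and the scaling factor `beta` are hypothetical, chosen here only to show the contrast.

```python
def contrastive_dpo_token_rewards(logp_dpo, logp_rev, beta=0.1):
    """Per-token contrastive reward from two DPO models.

    Each DPO-style reward is beta * (log pi - log pi_ref); subtracting the
    reverse model's reward from the normal model's cancels the shared
    reference term, leaving a scaled log-prob gap per token.
    """
    return [beta * (d - r) for d, r in zip(logp_dpo, logp_rev)]
```

Tokens the normal DPO model prefers and the reverse model dislikes receive large positive rewards, giving a denser, token-level training signal than a single response-level score.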
👥 Authors

Songming Zhang
Beijing Jiaotong University
natural language processing, text generation, machine translation

Xue Zhang
Beijing Key Lab of Traffic Data Analysis and Mining, Beijing Jiaotong University, Beijing, China

Tong Zhang
Tencent Inc, China

Bojie Hu
Tencent
natural language processing, machine translation

Yufeng Chen
Beijing Key Lab of Traffic Data Analysis and Mining, Beijing Jiaotong University, Beijing, China

Jinan Xu
Professor, School of Computer and Information Technology, Beijing Jiaotong University
NLP, machine translation, LLM