PSPO*: An Effective Process-supervised Policy Optimization for Reasoning Alignment

📅 2024-11-18
🏛️ arXiv.org
📈 Citations: 0 · Influential: 0
🤖 AI Summary
Large language models (LLMs) suffer from logical errors and redundant reasoning in inference tasks due to insufficient process-level supervision. Method: This paper proposes a novel process supervision paradigm centered on nonlinear reward modeling. It is the first to uncover the nonlinear coupling between chain-of-thought (CoT) length and accuracy in determining overall reward. We introduce PSPO*, a unified framework integrating reward modeling and policy optimization, and its instantiation PSPO-WRS, which employs a step-aware corrected Weibull distribution for dynamic reward shaping. Contribution/Results: Evaluated on six mathematical reasoning benchmarks, our approach significantly improves both final answer accuracy and step efficiency. Empirical results demonstrate the effectiveness and generalizability of nonlinear process supervision for aligning LLM reasoning behavior with human expectations.

📝 Abstract
Process supervision enhances the performance of large language models in reasoning tasks by providing feedback at each step of chain-of-thought reasoning. However, due to the lack of effective process supervision methods, even advanced large language models are prone to logical errors and redundant reasoning. We claim that the effectiveness of process supervision significantly depends on both the accuracy and the length of reasoning chains. Moreover, we identify that these factors exhibit a nonlinear relationship with the overall reward score of the reasoning process. Inspired by these insights, we propose a novel process supervision paradigm, PSPO*, which systematically outlines the workflow from reward model training to policy optimization, and highlights the importance of nonlinear rewards in process supervision. Based on PSPO*, we develop the PSPO-WRS, which considers the number of reasoning steps in determining reward scores and utilizes an adjusted Weibull distribution for nonlinear reward shaping. Experimental results on six mathematical reasoning datasets demonstrate that PSPO-WRS consistently outperforms current mainstream models.
Problem

Research questions and friction points this paper is trying to address.

Lack of effective process supervision in reasoning tasks
Logical errors and redundant reasoning in language models
Nonlinear relationship between reasoning chain factors and rewards
Innovation

Methods, ideas, or system contributions that make the work stand out.

Process supervision enhances reasoning step feedback
PSPO* integrates nonlinear rewards for supervision
PSPO-WRS adjusts Weibull distribution for reward shaping
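The paper's PSPO-WRS shapes the process reward nonlinearly using an adjusted Weibull distribution that takes the number of reasoning steps into account. The exact formula is not reproduced here; the sketch below is only an illustrative sketch of the general idea, assuming hypothetical shape (`k`) and scale (`lam`) parameters: per-step scores are averaged into a chain accuracy, then weighted by a Weibull density over chain length so that overly long chains are penalized nonlinearly rather than by a flat per-step cost.

```python
import math

def weibull_shaped_reward(step_scores, k=1.5, lam=8.0):
    """Illustrative nonlinear reward shaping (not the paper's exact formula).

    step_scores: per-step reward-model scores in [0, 1].
    k, lam: hypothetical Weibull shape/scale parameters governing which
    chain lengths receive the highest length weight.
    """
    n = len(step_scores)
    accuracy = sum(step_scores) / n  # mean per-step score in [0, 1]

    # Weibull PDF evaluated at the chain length, normalized by its value at
    # the mode so the length weight peaks at 1 for "typical" chain lengths.
    def pdf(x):
        return (k / lam) * (x / lam) ** (k - 1) * math.exp(-((x / lam) ** k))

    mode = lam * ((k - 1) / k) ** (1 / k)  # valid for k > 1
    length_weight = pdf(n) / pdf(mode)
    return accuracy * length_weight
```

With these parameters, a chain of roughly four steps sits near the Weibull mode and keeps most of its accuracy score, while a 30-step chain with identical per-step scores is sharply down-weighted, which captures the redundancy penalty the paper motivates.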
Authors

Jiawei Li
School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China

Xinyue Liang
PhD student, KTH Royal Institute of Technology
Machine learning · Distributed learning · Neural networks

Yizhe Yang
Beijing Institute of Technology
NLP · Dialogue

Chong Feng
School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China

Yang Gao
School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China