🤖 AI Summary
Large language models (LLMs) suffer from logical errors and redundant reasoning in inference tasks due to insufficient process-level supervision. Method: This paper proposes a novel process supervision paradigm centered on nonlinear reward modeling. It is the first to uncover the nonlinear coupling between chain-of-thought (CoT) length and accuracy in determining the overall reward. We introduce PSPO*, a unified framework integrating reward modeling and policy optimization, and its instantiation PSPO-WRS, which employs a step-aware adjusted Weibull distribution for dynamic reward shaping. Contribution/Results: Evaluated on six mathematical reasoning benchmarks, our approach significantly improves both final-answer accuracy and step efficiency. Empirical results demonstrate the effectiveness and generalizability of nonlinear process supervision for aligning LLM reasoning behavior with human expectations.
📝 Abstract
Process supervision enhances the performance of large language models on reasoning tasks by providing feedback at each step of chain-of-thought reasoning. However, due to the lack of effective process supervision methods, even advanced large language models are prone to logical errors and redundant reasoning. We claim that the effectiveness of process supervision depends significantly on both the accuracy and the length of reasoning chains. Moreover, we identify that these factors exhibit a nonlinear relationship with the overall reward score of the reasoning process. Inspired by these insights, we propose a novel process supervision paradigm, PSPO*, which systematically outlines the workflow from reward model training to policy optimization and highlights the importance of nonlinear rewards in process supervision. Based on PSPO*, we develop PSPO-WRS, which considers the number of reasoning steps in determining reward scores and utilizes an adjusted Weibull distribution for nonlinear reward shaping. Experimental results on six mathematical reasoning datasets demonstrate that PSPO-WRS consistently outperforms current mainstream models.
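To make the idea of Weibull-based nonlinear reward shaping concrete, the sketch below shows one plausible form: a base step reward is weighted by a Weibull density evaluated at the chain length, normalized so the weight peaks at 1 near a preferred step count. The shape and scale parameters (`k`, `lam`) and the normalization-at-the-mode choice are illustrative assumptions, not the paper's actual formulation.

```python
import math

def weibull_shaped_reward(base_reward: float, num_steps: int,
                          k: float = 2.0, lam: float = 5.0) -> float:
    """Nonlinearly scale a reward by chain-of-thought length (hypothetical sketch).

    The Weibull PDF with shape k and scale lam acts as a length-dependent
    weight; dividing by the density at the mode normalizes the weight into
    (0, 1], so chains near the preferred length keep most of their reward
    while overly long chains are penalized. All parameters are illustrative.
    """
    x = float(num_steps)
    # Weibull probability density at the given chain length
    pdf = (k / lam) * (x / lam) ** (k - 1) * math.exp(-((x / lam) ** k))
    # Mode of the Weibull distribution (valid for k > 1)
    mode = lam * ((k - 1) / k) ** (1 / k)
    pdf_mode = (k / lam) * (mode / lam) ** (k - 1) * math.exp(-((mode / lam) ** k))
    return base_reward * (pdf / pdf_mode)
```

With the illustrative defaults, a chain of about 3-4 steps retains nearly the full base reward, while a 10-step chain receives a sharply reduced weight, capturing the intuition that accuracy and length jointly and nonlinearly determine the reward.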