Improving Multi-Step Reasoning Abilities of Large Language Models with Direct Advantage Policy Optimization

📅 2024-12-24
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the training instability and inefficiency of reinforcement learning (RL) for large language models (LLMs) in multi-step reasoning—caused by sparse, end-of-sequence rewards—this paper proposes DAPO, a step-level offline RL algorithm. DAPO introduces a decoupled Actor-Critic architecture, wherein a fine-grained step-level Critic is trained independently to model per-step reasoning accuracy as a learnable, dense advantage signal—replacing conventional sparse terminal rewards. By adopting an offline RL paradigm, DAPO eliminates costly online environment interaction. It is the first method to enable advantage-driven, step-level policy optimization for LLMs. Empirically, DAPO achieves significant improvements over supervised fine-tuning (SFT) and state-of-the-art RL baselines on mathematical reasoning benchmarks (e.g., MATH, AMC) and code generation benchmarks (e.g., HumanEval, MBPP), demonstrating both the effectiveness and robustness of dense step-level advantages in enhancing LLMs’ complex reasoning capabilities.

📝 Abstract
The role of reinforcement learning (RL) in enhancing the reasoning of large language models (LLMs) is becoming increasingly significant. Despite the success of RL in many scenarios, there are still many challenges in improving the reasoning of LLMs. One challenge is the sparse reward, which makes optimization difficult for RL and necessitates a large number of data samples. Another challenge stems from the inherent instability of RL, particularly when using Actor-Critic (AC) methods to derive optimal policies, which often leads to unstable training processes. To address these issues, we introduce Direct Advantage Policy Optimization (DAPO), a novel step-level offline RL algorithm. Unlike standard alignment methods that rely solely on outcome rewards to optimize policies (such as DPO), DAPO employs a critic function to predict the reasoning accuracy at each step, thereby generating dense signals to refine the generation strategy. Additionally, the Actor and Critic components in DAPO are trained independently, avoiding the co-training instability observed in standard AC algorithms like PPO. We train DAPO on mathematical and code query datasets and then evaluate its performance on multiple benchmarks. Our results show that DAPO can effectively enhance the mathematical and code capabilities of both SFT models and RL models, demonstrating the effectiveness of DAPO.
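The core idea described above, replacing a sparse end-of-sequence reward with a dense per-step signal from a step-level critic, can be sketched as follows. This is a minimal illustration with hypothetical names (`step_advantages`, `toy_critic`), not the paper's exact formulation: it assumes the critic estimates the probability that a partial reasoning trace eventually reaches a correct answer, and defines each step's advantage as the change in that estimate.

```python
# Sketch (assumption, not the paper's exact method): a step-level
# critic maps a partial reasoning trace to an estimated success
# probability; the per-step advantage is the change in that estimate
# after appending each step, yielding a dense training signal.

def step_advantages(critic, steps):
    """critic(prefix) -> estimated success probability in [0, 1].
    Returns one dense advantage value per reasoning step."""
    advantages = []
    prefix = []
    prev_value = critic(tuple(prefix))
    for step in steps:
        prefix.append(step)
        value = critic(tuple(prefix))
        advantages.append(value - prev_value)  # dense, per-step signal
        prev_value = value
    return advantages

# Toy critic for illustration: the success estimate rises with each
# "good" step and is capped at 1.0.
def toy_critic(prefix):
    return min(1.0, 0.2 + 0.4 * sum(1 for s in prefix if s == "good"))

adv = step_advantages(toy_critic, ["good", "bad", "good"])
# Steps that raise the critic's success estimate receive positive
# advantage; unhelpful steps receive zero or negative advantage.
```

In an actual offline training loop, these per-step advantages would weight the policy's per-step log-probabilities, so every reasoning step gets credit or blame instead of only the final answer.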
Problem

Research questions and friction points this paper is trying to address.

Reinforcement Learning
Large Language Models
Training Instability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Direct Advantage Policy Optimization
Auxiliary Prediction Mechanism
Actor-Critic Decoupling
Jiacai Liu
Fudan University
reinforcement learning
Chaojie Wang
Skywork AI, Kunlun Inc.
Chris Yuhao Liu
University of California, Santa Cruz
post-training, reward modeling, reasoning
Liang Zeng
Skywork AI, Kunlun Inc.
Rui Yan
Skywork AI, Kunlun Inc.
Yiwen Sun
Institute for AI, Peking University
Intelligent Transportation Systems, Spatiotemporal data mining, Sequence learning
Yang Liu
Skywork AI, Kunlun Inc.
Yahui Zhou
Skywork AI, Kunlun Inc.