Improving RL Exploration for LLM Reasoning through Retrospective Replay

📅 2025-04-19
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) often identify high-quality solution candidates early in reinforcement learning (RL) post-training but lack the capability at that stage to carry the reasoning through to a correct solution; meanwhile, policy gradient updates irreversibly suppress these early exploratory trajectories, preventing their reuse even after the model's capability later improves. Method: We propose Retrospective Replay-based Reinforcement Learning (RRL), a novel framework featuring dynamic experience caching and state-value backtracking evaluation, which mitigates the irreversible suppression of early exploration by policy gradients and enables exploration memory to be carried across training phases and reused. Contribution/Results: RRL integrates seamlessly into the RLHF pipeline and significantly improves solution success rates on mathematical reasoning and code generation benchmarks, while concurrently enhancing model safety and helpfulness and maintaining high exploration efficiency throughout training.

📝 Abstract
Reinforcement learning (RL) has increasingly become a pivotal technique in the post-training of large language models (LLMs). The effective exploration of the output space is essential for the success of RL. We observe that for complex problems, during the early stages of training, the model exhibits strong exploratory capabilities and can identify promising solution ideas. However, its limited capability at this stage prevents it from successfully solving these problems. The early suppression of these potentially valuable solution ideas by the policy gradient hinders the model's ability to revisit and re-explore these ideas later. Consequently, although the LLM's capabilities improve in the later stages of training, it still struggles to effectively address these complex problems. To address this exploration issue, we propose a novel algorithm named Retrospective Replay-based Reinforcement Learning (RRL), which introduces a dynamic replay mechanism throughout the training process. RRL enables the model to revisit promising states identified in the early stages, thereby improving its efficiency and effectiveness in exploration. To evaluate the effectiveness of RRL, we conduct extensive experiments on complex reasoning tasks, including mathematical reasoning and code generation, and general dialogue tasks. The results indicate that RRL maintains high exploration efficiency throughout the training period, significantly enhancing the effectiveness of RL in optimizing LLMs for complicated reasoning tasks. Moreover, it also improves the performance of RLHF, making the model both safer and more helpful.
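The abstract describes a dynamic replay mechanism that caches promising states found early in training and lets later, more capable phases revisit them. The paper's implementation details are not given on this page, so the sketch below is purely illustrative: a hypothetical bounded, value-prioritized cache whose class name, capacity, and value estimates are all invented for illustration, not taken from RRL itself.

```python
import heapq
import random

class RetrospectiveReplayBuffer:
    """Illustrative cache of promising early exploration states.

    Keeps the top-`capacity` states by estimated value, so later
    training phases can restart rollouts from them instead of
    losing them to policy-gradient suppression.
    """

    def __init__(self, capacity=256):
        self.capacity = capacity
        self._heap = []      # min-heap of (value, insertion_id, state)
        self._counter = 0    # tie-breaker so states are never compared

    def add(self, state, value_estimate):
        """Cache a state (e.g. a prompt plus a partial reasoning prefix)."""
        entry = (value_estimate, self._counter, state)
        self._counter += 1
        if len(self._heap) < self.capacity:
            heapq.heappush(self._heap, entry)
        elif value_estimate > self._heap[0][0]:
            # Buffer full: evict the least promising cached state.
            heapq.heapreplace(self._heap, entry)

    def sample(self, k=1):
        """Sample k cached states (with replacement), biased toward higher value."""
        if not self._heap or k <= 0:
            return []
        entries = random.choices(
            self._heap,
            weights=[max(v, 1e-6) for v, _, _ in self._heap],
            k=k,
        )
        return [state for _, _, state in entries]
```

A training loop under this assumed design would call `buffer.add(partial_solution, value)` whenever the critic flags a promising but unfinished trajectory, and periodically seed new rollouts from `buffer.sample(batch_size)` so early ideas remain reachable after the policy has drifted away from them.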
Problem

Research questions and friction points this paper is trying to address.

Enhancing RL exploration for LLM reasoning tasks
Addressing early suppression of valuable solution ideas
Improving RL efficiency via retrospective replay mechanism
Innovation

Methods, ideas, or system contributions that make the work stand out.

Retrospective Replay-based Reinforcement Learning (RRL)
Dynamic replay mechanism for exploration
Enhances RL efficiency in LLM training
Shihan Dou
Fudan University
Topics: LLMs, Code LMs, RL, Alignment
Muling Wu
Fudan University
Jingwen Xu
School of Computer Science, Fudan University
Rui Zheng
School of Computer Science, Fudan University
Tao Gui
Institute of Modern Languages and Linguistics, Fudan University
Qi Zhang
School of Computer Science, Fudan University
Xuanjing Huang
School of Computer Science, Fudan University