Breaking the Exploration Bottleneck: Rubric-Scaffolded Reinforcement Learning for General LLM Reasoning

📅 2025-08-23
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) face an exploration bottleneck in reinforcement learning (RL): limited exploration yields few high-quality samples, which in turn constrains what can be learned. To address this, the authors propose Rubric-Scaffolded RL (RuscaRL), a framework that uses checklist-style rubrics in two roles: as explicit scaffolding for exploration, injected into task instructions and progressively decayed so the model internalizes the underlying reasoning patterns, and as references for verifiable LLM-as-a-Judge reward scores during training. On HealthBench-500, RuscaRL lifts Qwen-2.5-7B-Instruct from 23.6 to 50.3, surpassing GPT-4.1; a fine-tuned Qwen3-30B-A3B-Instruct variant reaches 61.1, outperforming leading models including OpenAI-o3. The authors present this as the first work to explicitly embed structured, interpretable rubrics into the RL training pipeline, enabling controllable generation of high-quality reasoning samples and effective policy optimization.

📝 Abstract
Recent advances in Large Language Models (LLMs) have underscored the potential of Reinforcement Learning (RL) to facilitate the emergence of reasoning capabilities. Despite the encouraging results, a fundamental dilemma persists as RL improvement relies on learning from high-quality samples, yet the exploration for such samples remains bounded by the inherent limitations of LLMs. This, in effect, creates an undesirable cycle in which what cannot be explored cannot be learned. In this work, we propose Rubric-Scaffolded Reinforcement Learning (RuscaRL), a novel instructional scaffolding framework designed to break the exploration bottleneck for general LLM reasoning. Specifically, RuscaRL introduces checklist-style rubrics as (1) explicit scaffolding for exploration during rollout generation, where different rubrics are provided as external guidance within task instructions to steer diverse high-quality responses. This guidance is gradually decayed over time, encouraging the model to internalize the underlying reasoning patterns; (2) verifiable rewards for exploitation during model training, where we can obtain robust LLM-as-a-Judge scores using rubrics as references, enabling effective RL on general reasoning tasks. Extensive experiments demonstrate the superiority of the proposed RuscaRL across various benchmarks, effectively expanding reasoning boundaries under the best-of-N evaluation. Notably, RuscaRL significantly boosts Qwen-2.5-7B-Instruct from 23.6 to 50.3 on HealthBench-500, surpassing GPT-4.1. Furthermore, our fine-tuned variant on Qwen3-30B-A3B-Instruct achieves 61.1 on HealthBench-500, outperforming leading LLMs including OpenAI-o3.
Problem

Research questions and friction points this paper is trying to address.

Addresses LLM exploration bottleneck in reinforcement learning
Proposes rubric-based scaffolding for diverse high-quality responses
Enables verifiable rewards using LLM-as-a-Judge scoring
Innovation

Methods, ideas, or system contributions that make the work stand out.

Rubric-scaffolded reinforcement learning for LLM reasoning
Checklist rubrics guide exploration and decay over time
Rubrics provide verifiable rewards for model training
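The two rubric roles above can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: the function and variable names are invented, the decay schedule is a simple linear stand-in for the paper's progressive de-scaffolding, and keyword matching is a toy proxy for the LLM-as-a-Judge scoring that the paper actually uses.

```python
import random

# Illustrative checklist-style rubric (invented items, HealthBench-flavored).
RUBRIC = [
    "States the key clinical risk explicitly",
    "Recommends consulting a professional",
    "Avoids unsupported dosage claims",
]

def scaffold_prob(step: int, total_steps: int, p0: float = 1.0) -> float:
    """Linearly decay the chance of showing the rubric, from p0 down to 0.
    (A stand-in for the paper's progressive de-scaffolding schedule.)"""
    return max(0.0, p0 * (1.0 - step / total_steps))

def build_prompt(task: str, step: int, total_steps: int,
                 rng: random.Random) -> str:
    """Role 1: rubric as exploration scaffolding. With decaying probability,
    append the checklist to the task instruction to steer rollouts."""
    if rng.random() < scaffold_prob(step, total_steps):
        checklist = "\n".join(f"- {item}" for item in RUBRIC)
        return f"{task}\n\nYour answer should satisfy:\n{checklist}"
    return task

def rubric_reward(response: str, keywords: dict) -> float:
    """Role 2: rubric as a verifiable reward reference. Here, the fraction
    of rubric items whose keyword appears in the response; the paper instead
    queries a judge model with the rubric as its scoring reference."""
    hits = sum(1 for item in RUBRIC if keywords[item] in response.lower())
    return hits / len(RUBRIC)

# Early in training the rubric is almost always shown; late in training
# the model must reproduce the pattern without the scaffold.
rng = random.Random(0)
early = build_prompt("Advise on medication safety.", step=0,
                     total_steps=100, rng=rng)
kw = {RUBRIC[0]: "risk", RUBRIC[1]: "professional", RUBRIC[2]: "dosage"}
score = rubric_reward("Note the risk and see a professional.", kw)
```

The key design point mirrored here is the decoupling the summary describes: exploration quality comes from the scaffolded prompt, while the learning signal comes from the same rubric reused as a reward reference.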
Yang Zhou · Zhejiang University
Sunzhu Li · Li Auto Inc.
Shunyu Liu · Nanyang Technological University
Wenkai Fang · Zhejiang University
Jiale Zhao · Li Auto Inc.
Jingwen Yang · The Chinese University of Hong Kong, Shenzhen
Jianwei Lv · Li Auto Inc.
Kongcheng Zhang · Zhejiang University
Yihe Zhou · Zhejiang University
Hengtong Lu · Li Auto Inc.
Wei Chen · Li Auto Inc.
Yan Xie · Li Auto Inc.
Mingli Song · Zhejiang University