🤖 AI Summary
To address the overreliance on large-scale demonstration data for enhancing the reasoning capabilities of large language models (LLMs), this paper proposes LPPO, a sample-centric, progressive optimization framework. LPPO shifts the focus from brute-force data scaling to efficient use of a small set of trusted, high-quality demonstrations. Its core contributions are: (1) prefix-guided sampling, which dynamically augments training data with partial solution prefixes; (2) a learning-progress weighting mechanism based on an exponential moving average, which adaptively adjusts sample importance during training; and (3) integration of both techniques into a reinforcement learning framework with verifiable reward signals. Evaluated on mathematical reasoning benchmarks, LPPO consistently outperforms strong baselines, achieving both faster convergence and higher asymptotic performance. The results show that strategic curation and dynamic reweighting of high-fidelity demonstrations yield substantial gains without requiring extensive data expansion or architectural modifications.
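The prefix-guided sampling idea can be illustrated with a minimal sketch: prepend the first part of an expert demonstration to the question so the policy is guided toward a correct solution path. The function name, the token-level split, and the `ratio` parameter are illustrative assumptions, not the paper's exact recipe.

```python
def prefix_guided_prompt(question: str, expert_solution: str, ratio: float) -> str:
    """Build a training prompt that includes the first `ratio` fraction
    of an expert demonstration as a hint prefix.

    Token-level splitting and the fixed `ratio` are simplifying
    assumptions; the paper applies this online during RL sampling,
    particularly for challenging instances.
    """
    tokens = expert_solution.split()
    cut = int(len(tokens) * ratio)
    prefix = " ".join(tokens[:cut])
    return f"{question}\n{prefix}"

# Example: a harder instance might receive a longer hint prefix.
question = "Prove that the sum of two even numbers is even."
solution = "Let a = 2m and b = 2n. Then a + b = 2(m + n), which is even."
prompt = prefix_guided_prompt(question, solution, ratio=0.5)
```

In practice the sampling ratio could be scheduled per instance, e.g. longer prefixes for problems the model currently fails, shrinking to zero as its pass rate improves.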
📝 Abstract
Reinforcement learning with verifiable rewards (RLVR) has recently advanced the reasoning capabilities of large language models (LLMs). While prior work has emphasized algorithmic design, data curation, and reward shaping, we investigate RLVR from a sample-centric perspective and introduce LPPO (Learning-Progress and Prefix-guided Optimization), a framework of progressive optimization techniques. Our work addresses a critical question: how to best leverage a small set of trusted, high-quality demonstrations, rather than simply scaling up data volume. First, motivated by how hints aid human problem-solving, we propose prefix-guided sampling, an online data augmentation method that incorporates partial solution prefixes from expert demonstrations to guide the policy, particularly for challenging instances. Second, inspired by how humans focus on important questions aligned with their current capabilities, we introduce learning-progress weighting, a dynamic strategy that adjusts each training sample's influence based on model progression. We estimate sample-level learning progress via an exponential moving average of per-sample pass rates, promoting samples that foster learning and de-emphasizing stagnant ones. Experiments on mathematical-reasoning benchmarks demonstrate that our methods outperform strong baselines, yielding faster convergence and a higher performance ceiling.
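The learning-progress weighting described above can be sketched as follows: track an exponential moving average (EMA) of each sample's pass rate and derive a training weight from how much that estimate is still moving. The EMA decay and the mapping from progress to weight here are illustrative assumptions, not the paper's exact formulation.

```python
class LearningProgressWeighter:
    """Per-sample EMA of pass rates, with a weight derived from recent
    progress. Samples whose pass rate is still changing are promoted;
    stagnant ones (progress near zero) are de-emphasized.

    The decay value and the `|delta EMA| + eps` weight are simplifying
    assumptions for illustration.
    """

    def __init__(self, decay: float = 0.9):
        self.decay = decay
        self.ema = {}       # sample id -> EMA of pass rate
        self.progress = {}  # sample id -> last absolute change in EMA

    def update(self, sample_id, pass_rate: float) -> None:
        # Initialize the EMA to the first observed pass rate.
        prev = self.ema.get(sample_id, pass_rate)
        new = self.decay * prev + (1.0 - self.decay) * pass_rate
        self.ema[sample_id] = new
        self.progress[sample_id] = abs(new - prev)

    def weight(self, sample_id, eps: float = 1e-3) -> float:
        # eps keeps every sample's weight strictly positive.
        return self.progress.get(sample_id, 0.0) + eps
```

A sample whose pass rate jumps from 0 to 1 across RL iterations would receive a larger weight than one stuck at a constant pass rate, steering gradient updates toward instances the model is actively learning.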