TraPO: A Semi-Supervised Reinforcement Learning Framework for Boosting LLM Reasoning

📅 2025-12-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high cost of human reward annotation and the late-stage collapse prevalent in unsupervised methods for training large reasoning models (LRMs) via reinforcement learning, this paper proposes TraPO, a semi-supervised framework for reinforcement learning with verifiable rewards (RLVR). Its core innovation is a learned trajectory-similarity matching mechanism that leverages a small set of expert-annotated samples to guide policy optimization on unlabeled ones, coupled with an internal-consistency-based reward function integrating entropy regularization and majority voting. With only 1K labeled and 3K unlabeled samples, TraPO achieves 42.6% average accuracy across six mathematical reasoning benchmarks and three out-of-distribution (OOD) benchmarks, substantially outperforming the best unsupervised method trained on 45K unlabeled samples (38.3%). Under a larger-scale configuration (4K labeled + 12K unlabeled), TraPO surpasses a fully supervised model trained on all 45K labeled samples on every benchmark, while using only 10% of the labeled data.
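The internal-consistency reward described above can be sketched as follows. This is a minimal hypothetical illustration of the majority-voting component only: each sampled rollout whose final answer agrees with the most common answer gets a positive pseudo-reward. The paper's actual reward also incorporates entropy regularization, which is omitted here, and the function name is invented for illustration.

```python
from collections import Counter

def majority_vote_reward(answers):
    """Consistency-based pseudo-reward for a group of rollouts on one prompt.

    answers: final answers extracted from sampled reasoning trajectories.
    Returns a reward of 1.0 for rollouts matching the majority answer,
    0.0 otherwise. A simplified sketch of majority-voting self-consistency,
    not the paper's exact reward function.
    """
    counts = Counter(answers)
    majority, _ = counts.most_common(1)[0]
    return [1.0 if a == majority else 0.0 for a in answers]
```

In unsupervised RLVR, rewards like this replace ground-truth answer verification; the paper's observation is that without external supervision they can reinforce confidently wrong reasoning patterns, which motivates anchoring them to a small labeled set.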

📝 Abstract
Reinforcement learning with verifiable rewards (RLVR) has proven effective in training large reasoning models (LRMs) by leveraging answer-verifiable signals to guide policy optimization, which, however, suffers from high annotation costs. To alleviate this problem, recent work has explored unsupervised RLVR methods that derive rewards solely from the model's internal consistency, such as through entropy and majority voting. While seemingly promising, these methods often suffer from model collapse in the later stages of training, which may arise from the reinforcement of incorrect reasoning patterns in the absence of external supervision. In this work, we investigate a novel semi-supervised RLVR paradigm that utilizes a small labeled set to guide RLVR training on unlabeled samples. Our key insight is that supervised rewards are essential for stabilizing consistency-based training on unlabeled samples, ensuring that only reasoning patterns verified on labeled instances are incorporated into RL training. Technically, we propose an effective policy optimization algorithm, TraPO, that identifies reliable unlabeled samples by matching their learning trajectory similarity to labeled ones. Building on this, TraPO achieves remarkable data efficiency and strong generalization on six widely used mathematical reasoning benchmarks (AIME24/25, AMC, MATH-500, Minerva, and Olympiad) and three out-of-distribution tasks (ARC-c, GPQA-diamond, and MMLU-pro). With only 1K labeled and 3K unlabeled samples, TraPO reaches 42.6% average accuracy, surpassing the best unsupervised method trained on 45K unlabeled samples (38.3%). Notably, when using 4K labeled and 12K unlabeled samples, TraPO even outperforms the fully supervised model trained on the full 45K labeled samples on all benchmarks, while using only 10% of the labeled data. The code is available via https://github.com/ShenzhiYang2000/TRAPO.
Problem

Research questions and friction points this paper is trying to address.

Reduces annotation costs in reinforcement learning for reasoning models
Prevents model collapse in unsupervised reward-based training
Enhances data efficiency and generalization with semi-supervised approach
Innovation

Methods, ideas, or system contributions that make the work stand out.

Semi-supervised RLVR uses labeled data to guide unlabeled training
TraPO matches learning trajectories to identify reliable unlabeled samples
Achieves high accuracy with minimal labeled data on reasoning benchmarks
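The trajectory-matching idea above can be sketched in a few lines. This is a hypothetical reading of the selection step, assuming each sample's "learning trajectory" is summarized as a numeric vector (e.g., per-step training signals) and that reliability is judged by cosine similarity to labeled trajectories; the paper defines its own similarity measure and selection criterion, and all names and the threshold here are illustrative.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length trajectory vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def select_reliable(unlabeled_trajs, labeled_trajs, threshold=0.9):
    """Return indices of unlabeled samples whose learning trajectory
    closely matches at least one labeled trajectory. Such samples are
    then admitted into RL training; the rest are held out."""
    keep = []
    for i, u in enumerate(unlabeled_trajs):
        if max(cosine(u, l) for l in labeled_trajs) >= threshold:
            keep.append(i)
    return keep
```

The design intuition is that unlabeled samples whose training dynamics resemble those of verified labeled samples are more likely to carry correct reasoning patterns, so filtering by trajectory similarity stabilizes consistency-based rewards.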
Shenzhi Yang
Zhejiang University
machine learning · learning theory · large language models
Guangcheng Zhu
Zhejiang University
Xing Zheng
Ph.D., University of California, Riverside
Sensor fusion · SLAM · VIO
Yingfan MA
Ant Group
Zhongqi Chen
Ant Group
Bowen Song
Ant Group
Weiqiang Wang
Ant Group
Junbo Zhao
Zhejiang University
Gang Chen
Zhejiang University
Haobo Wang
Zhejiang University
Machine Learning