How to Allocate, How to Learn? Dynamic Rollout Allocation and Advantage Modulation for Policy Optimization

πŸ“… 2026-02-22
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses two obstacles to reinforcement learning for reasoning tasks: uneven rollout allocation and dynamic imbalance in policy optimization. Uniform rollouts disregard gradient-variance disparities across problems, the softmax policy attenuates gradients for high-confidence actions, and overly large updates destabilize training. To tackle these issues, the authors propose DynaMO, a framework that dynamically allocates rollouts at the sequence level by minimizing gradient variance, introduces advantage modulation at the token level to compensate for gradient decay, and stabilizes update magnitudes by monitoring entropy variation. The key contributions are the first theoretical derivation of a Bernoulli-variance-based rollout allocation criterion and a novel gradient-aware advantage modulation mechanism. Experiments show that DynaMO significantly outperforms existing RLVR methods across multiple mathematical reasoning benchmarks, achieving both high efficiency and robustness.
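The sequence-level allocation idea can be sketched as Neyman-style proportional allocation, where each problem's rollout share scales with the Bernoulli standard deviation sqrt(p(1-p)) of its estimated success rate, so uncertain problems (p near 0.5) receive more rollouts than near-saturated ones. This is an illustrative reading of the summary only, not the paper's actual criterion; the function name `allocate_rollouts`, the `min_per_problem` floor, and the fallback-to-uniform rule are assumptions.

```python
import math

def allocate_rollouts(success_rates, total_budget, min_per_problem=1):
    """Illustrative variance-aware rollout allocation (sketch, not DynaMO).

    Allocates a fixed rollout budget across problems in proportion to the
    Bernoulli standard deviation sqrt(p * (1 - p)) of each problem's
    estimated success rate p, so near-saturated problems (p close to 0 or 1)
    receive fewer rollouts than uncertain ones (p close to 0.5).
    """
    sigmas = [math.sqrt(p * (1.0 - p)) for p in success_rates]
    total_sigma = sum(sigmas)
    if total_sigma == 0.0:
        # Every problem is fully solved or fully failed: fall back to uniform.
        base = total_budget // len(success_rates)
        return [base] * len(success_rates)
    raw = [total_budget * s / total_sigma for s in sigmas]
    return [max(min_per_problem, round(r)) for r in raw]

# The uncertain middle problem (p = 0.5) gets the bulk of the budget.
print(allocate_rollouts([0.95, 0.5, 0.1], total_budget=32))
```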

πŸ“ Abstract
Reinforcement Learning with Verifiable Rewards (RLVR) has proven effective for Large Language Model (LLM) reasoning, yet current methods face key challenges in resource allocation and policy optimization dynamics: (i) uniform rollout allocation ignores gradient variance heterogeneity across problems, and (ii) the softmax policy structure causes gradient attenuation for high-confidence correct actions, while excessive gradient updates may destabilize training. Therefore, we propose DynaMO, a theoretically-grounded dual-pronged optimization framework. At the sequence level, we prove that uniform allocation is suboptimal and derive variance-minimizing allocation from first principles, establishing Bernoulli variance as a computable proxy for gradient informativeness. At the token level, we develop gradient-aware advantage modulation grounded in theoretical analysis of gradient magnitude bounds. Our framework compensates for gradient attenuation of high-confidence correct actions while utilizing entropy changes as computable indicators to stabilize excessive update magnitudes. Extensive experiments conducted on a diverse range of mathematical reasoning benchmarks demonstrate consistent improvements over strong RLVR baselines. Our implementation is available at: \href{https://anonymous.4open.science/r/dynamo-680E/README.md}{https://anonymous.4open.science/r/dynamo}.
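The gradient attenuation the abstract describes follows from the softmax policy gradient: the gradient on a sampled token's logit scales with (1 - p), so it vanishes as the token probability p approaches 1. A minimal sketch of what token-level compensation could look like, assuming a simple clipped rescaling of positive advantages; the function name, `eps`, and `max_boost` are invented for illustration and are not the paper's actual mechanism.

```python
def modulate_advantage(advantage, token_prob, eps=0.1, max_boost=5.0):
    """Illustrative gradient-aware advantage modulation (sketch, not DynaMO).

    Under a softmax policy, the gradient on a sampled token's logit scales
    with (1 - p), so confident correct tokens (p near 1) receive vanishing
    updates. This sketch compensates by rescaling positive advantages by
    1 / max(1 - p, eps), capped at max_boost to keep update magnitudes
    bounded.
    """
    if advantage <= 0.0:
        # Only compensate attenuation on positively rewarded (correct) tokens.
        return advantage
    boost = min(1.0 / max(1.0 - token_prob, eps), max_boost)
    return advantage * boost
```

The cap plays the role of the stabilization the abstract mentions in spirit: without it, the boost for p near 1 would reintroduce the very update-magnitude blowup the method aims to control.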
Problem

Research questions and friction points this paper is trying to address.

Reinforcement Learning with Verifiable Rewards
rollout allocation
gradient variance
policy optimization
gradient attenuation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic Rollout Allocation
Advantage Modulation
Gradient Variance Minimization
Policy Optimization
Reinforcement Learning with Verifiable Rewards
πŸ”Ž Similar Papers
No similar papers found.
Yangyi Fang
Meituan, Tsinghua University
Jiaye Lin
Meituan
Xiaoliang Fu
Meituan, Fudan University
Cong Qin
Meituan, Peking University
Haolin Shi
University of Science and Technology of China
3D AIGC, Computer Vision
Chaowen Hu
Meituan
Lu Pan
Tencent
Knowledge graphs, event graphs, neural networks, text generation
Ke Zeng
Meituan
Xunliang Cai
Meituan