IsoCompute Playbook: Optimally Scaling Sampling Compute for LLM RL

📅 2026-03-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the absence of principled guidelines for allocating compute in reinforcement learning (RL) post-training of large language models (LLMs). Focusing on the joint optimization of parallel trajectories per problem, problems per batch, and update steps under a fixed compute budget, the study uncovers a saturation effect: the optimal number of parallel trajectories grows with the budget and then plateaus, driven by distinct mechanisms on easy versus hard tasks. Within an online policy-gradient RL framework, the authors conduct systematic ablations and validate their findings across diverse base models and data distributions. They propose a practical, efficient resource-allocation strategy that consistently improves both sample and compute efficiency, offering actionable guidance for RL-based post-training of LLMs.

📝 Abstract
While scaling laws guide compute allocation for LLM pre-training, analogous prescriptions for reinforcement learning (RL) post-training of large language models (LLMs) remain poorly understood. We study the compute-optimal allocation of sampling compute for on-policy RL methods in LLMs, framing scaling as a compute-constrained optimization over three resources: parallel rollouts per problem, number of problems per batch, and number of update steps. We find that the compute-optimal number of parallel rollouts per problem increases predictably with compute budget and then saturates. This trend holds across both easy and hard problems, though driven by different mechanisms: solution sharpening on easy problems and coverage expansion on hard problems. We further show that increasing the number of parallel rollouts mitigates interference across problems, while the number of problems per batch primarily affects training stability and can be chosen within a broad range. Validated across base models and data distributions, our results recast RL scaling laws as prescriptive allocation rules and provide practical guidance for compute-efficient LLM RL post-training.
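The allocation problem in the abstract can be made concrete: with a fixed sampling budget measured in total trajectories, the three knobs are rollouts per problem G, problems per batch B, and update steps S, constrained roughly by G · B · S ≤ C. A minimal sketch of enumerating feasible allocations under such a budget (the function name, grids, and toy budget are illustrative, not from the paper):

```python
from itertools import product

def feasible_allocations(budget, G_vals, B_vals, S_vals):
    """Enumerate (rollouts-per-problem, problems-per-batch, steps)
    triples whose total sampled trajectories G*B*S fit the budget."""
    return [
        (G, B, S)
        for G, B, S in product(G_vals, B_vals, S_vals)
        if G * B * S <= budget
    ]

# Toy budget of 4096 total rollouts; candidate grids are illustrative.
allocs = feasible_allocations(
    budget=4096,
    G_vals=[1, 2, 4, 8, 16, 32],
    B_vals=[8, 16, 32, 64],
    S_vals=[1, 2, 4, 8],
)

# Pick the triple that spends the most of the budget, breaking ties
# toward larger G (mirroring the finding that the optimal number of
# parallel rollouts grows with budget before saturating).
best = max(allocs, key=lambda t: (t[0] * t[1] * t[2], t[0]))
print(best)  # → (32, 16, 8)
```

In practice the paper's prescription replaces the toy tie-break with measured training outcomes, but the search space itself has exactly this product structure.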
Problem

Research questions and friction points this paper is trying to address.

sampling compute
reinforcement learning
large language models
compute allocation
scaling laws
Innovation

Methods, ideas, or system contributions that make the work stand out.

compute-optimal scaling
parallel rollouts
RL post-training
sampling compute allocation
LLM reinforcement learning