🤖 AI Summary
Current large language models (LLMs) rely heavily on human-annotated data to train their reasoning capabilities, which hinders their evolution toward superhuman intelligence.
Method: We propose R-Zero, the first fully human-annotation-free self-evolving reasoning framework. It employs a dynamic two-model adversarial paradigm, a Challenger and a Solver, that autonomously generates reasoning tasks, probes the Solver's capability boundary, and establishes a closed-loop reinforcement learning optimization process. R-Zero abandons conventional supervised learning in favor of dynamic curriculum learning, enabling continuous capability expansion.
Contribution/Results: Evaluated on base models including Qwen3-4B-Base, R-Zero achieves improvements of +6.49 and +7.54 points on mathematical and general reasoning benchmarks, respectively. It significantly enhances zero-shot reasoning generalization without any human supervision, marking a critical step toward autonomous LLM reasoning advancement.
📝 Abstract
Self-evolving Large Language Models (LLMs) offer a scalable path toward super-intelligence by autonomously generating, refining, and learning from their own experiences. However, existing methods for training such models still rely heavily on vast human-curated tasks and labels, typically via fine-tuning or reinforcement learning, which poses a fundamental bottleneck to advancing AI systems toward capabilities beyond human intelligence. To overcome this limitation, we introduce R-Zero, a fully autonomous framework that generates its own training data from scratch. Starting from a single base LLM, R-Zero initializes two independent models with distinct roles, a Challenger and a Solver. These models are optimized separately and co-evolve through interaction: the Challenger is rewarded for proposing tasks near the edge of the Solver's capability, and the Solver is rewarded for solving increasingly challenging tasks posed by the Challenger. This process yields a targeted, self-improving curriculum without any pre-existing tasks or labels. Empirically, R-Zero substantially improves reasoning capability across different backbone LLMs, e.g., boosting Qwen3-4B-Base by +6.49 on math-reasoning benchmarks and +7.54 on general-domain reasoning benchmarks.
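The Challenger/Solver co-evolution loop can be illustrated with a toy simulation. This is a minimal sketch under stated assumptions, not the paper's actual objective: the uncertainty-style reward that peaks when the Solver succeeds about half the time, the scalar "difficulty"/"skill" stand-ins for real models, and the greedy grid search in place of RL optimization are all illustrative choices.

```python
import random

def challenger_reward(success_rate: float) -> float:
    # Hypothetical uncertainty reward: maximal (1.0) when the Solver
    # succeeds on ~50% of attempts, zero when the task is trivially
    # easy (rate 1.0) or impossibly hard (rate 0.0).
    return 1.0 - 2.0 * abs(success_rate - 0.5)

def solver_success_rate(task_difficulty: float, solver_skill: float,
                        attempts: int = 32, seed: int = 0) -> float:
    # Toy stand-in for sampling the Solver several times on one task:
    # success probability shrinks as difficulty exceeds skill.
    rng = random.Random(seed)
    p = max(0.0, min(1.0, 1.0 - (task_difficulty - solver_skill)))
    return sum(rng.random() < p for _ in range(attempts)) / attempts

def co_evolve(rounds: int = 5) -> list[float]:
    # Closed loop: each round the Challenger proposes a difficulty near
    # the Solver's capability edge, then the Solver improves by training
    # on it, so the curriculum drifts upward over time.
    solver_skill, curriculum = 0.2, []
    for _ in range(rounds):
        # Greedy grid search as a stand-in for the Challenger's RL step:
        # pick the difficulty whose empirical success rate is nearest 0.5.
        candidates = [i / 20 for i in range(21)]
        best = max(candidates,
                   key=lambda d: challenger_reward(
                       solver_success_rate(d, solver_skill)))
        curriculum.append(best)
        solver_skill += 0.1  # Solver improves after training on the task
    return curriculum

print(co_evolve())
```

Running the loop shows the key qualitative behavior the abstract describes: the proposed difficulty tracks just ahead of the Solver's growing skill, yielding a self-paced curriculum with no external labels.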