GrandCode: Achieving Grandmaster Level in Competitive Programming via Agentic Reinforcement Learning

📅 2026-04-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the longstanding challenge of achieving consistent superhuman performance in competitive programming, where artificial intelligence has historically struggled to surpass elite human contestants. The authors propose a multi-agent reinforcement learning framework that orchestrates modular collaboration among specialized agents responsible for hypothesis generation, solution synthesis, testing, and summarization. By integrating post-training with online reinforcement learning and introducing Agentic GRPO—an algorithm specifically designed for multi-stage reasoning and delayed rewards—the system effectively mitigates off-policy distributional shift. Evaluated in real-world conditions, the approach secured first place in three consecutive Codeforces contests in March 2026, marking the first instance of an AI system outperforming all human participants, including legendary grandmasters, in authentic competitive programming environments.
📝 Abstract
Competitive programming remains one of the last human strongholds in coding against AI. The best AI systems to date still underperform the best humans at competitive programming: the most recent best result, Google's Gemini~3 Deep Think, attained 8th place, even though it was not evaluated under live competition conditions. In this work, we introduce GrandCode, a multi-agent RL system designed for competitive programming. The capability of GrandCode is attributed to two key factors: (1) it orchestrates a variety of agentic modules (hypothesis proposal, solver, test generator, summarization, etc.) and jointly improves them through post-training and online test-time RL; (2) we introduce Agentic GRPO, an algorithm specifically designed for multi-stage agent rollouts with delayed rewards and the severe off-policy drift that is prevalent in agentic RL. GrandCode is the first AI system that consistently beats all human participants in live competitive programming contests: in the three most recent Codeforces live competitions, i.e., Round~1087 (Mar 21, 2026), Round~1088 (Mar 28, 2026), and Round~1089 (Mar 29, 2026), GrandCode placed first in all of them, beating every human participant, including legendary grandmasters. GrandCode shows that AI systems have reached a point where they surpass the strongest human programmers on the most competitive coding tasks.
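The abstract names GRPO as the base algorithm that Agentic GRPO extends; the paper entry does not spell out the agentic variant, so the following is only a minimal sketch of standard GRPO's group-relative advantage step, assuming each prompt is sampled into a group of rollouts with one scalar reward each (the function name and the test-pass-rate reward are illustrative, not from the paper).

```python
import math

def grpo_advantages(rewards, eps=1e-8):
    """Group-relative advantages as in GRPO: each rollout's reward is
    normalized by the mean and std of its own sampled group, so no
    learned value function (critic) is needed."""
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = math.sqrt(var)
    return [(r - mean) / (std + eps) for r in rewards]

# Example: a group of 4 rollouts for one problem, rewarded by whether
# the generated solution passed the tests (a plausible reward here).
advs = grpo_advantages([1.0, 0.0, 0.0, 1.0])
```

Rollouts that beat their group's mean get positive advantage and are reinforced; the open question the paper targets is how to assign such rewards across multi-stage agent trajectories where the outcome arrives only at the end.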
Problem

Research questions and friction points this paper is trying to address.

competitive programming
artificial intelligence
grandmaster level
human-AI comparison
coding competition
Innovation

Methods, ideas, or system contributions that make the work stand out.

Agentic Reinforcement Learning
Multi-agent System
Competitive Programming
GRPO
Code Generation
Xiaoya Li
University of Washington
Xiaofei Sun
Stony Brook University, Zhejiang University
Social and Information Networks, Natural Language Processing, Machine Learning
Guoyin Wang
Independent Researcher
Songqiao Su
Chris Shum
Jiwei Li
DeepReinforce Team