🤖 AI Summary
Existing LLM benchmarks rely on static, already-solved tasks (e.g., math problems), failing to measure or incentivize genuine scientific progress. Method: We propose a "progress-oriented benchmark" paradigm centered on advancing scientific understanding, replacing static test sets with reproducible, verifiable dynamic training environments (e.g., the NanoGPT Speedrun). Our framework emphasizes scientifically meaningful gains in training efficiency and loss reduction, integrating runtime validation, anti-cheating mechanisms, and fine-grained telemetry; it standardizes data splits, reference models, and training infrastructure to enable real-time loss monitoring, convergence verification, and analysis of the training-efficiency frontier. Contribution/Results: On the NanoGPT Speedrun, we achieve a new SOTA, reducing training time by 3 seconds, and qualitatively observe the emergence of novel algorithmic ideas. The work aims to catalyze a community shift toward open, quantifiable, research-grade benchmarking practices.
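The summary mentions runtime validation, anti-cheating mechanisms, and fine-grained telemetry but does not specify an interface; the sketch below is one hypothetical way such an environment harness could be structured (all names, such as `RunTelemetry` and `verify_run`, are illustrative assumptions, not from the release).

```python
# Hypothetical sketch of a progress-oriented benchmark harness.
# Names (RunTelemetry, verify_run) are illustrative only.
from dataclasses import dataclass, field


@dataclass
class RunTelemetry:
    """Fine-grained per-step telemetry emitted during training."""
    steps: list = field(default_factory=list)        # global step indices
    losses: list = field(default_factory=list)       # validation loss per step
    wall_clock: list = field(default_factory=list)   # elapsed seconds at each step

    def log(self, step: int, loss: float, seconds: float) -> None:
        self.steps.append(step)
        self.losses.append(loss)
        self.wall_clock.append(seconds)


def verify_run(telemetry: RunTelemetry, target_loss: float) -> bool:
    """Anti-gaming check: the submitted run must actually reach the target
    loss, with non-decreasing wall-clock times (no spliced or edited logs)."""
    if not telemetry.losses:
        return False
    times_ok = all(a <= b for a, b in zip(telemetry.wall_clock,
                                          telemetry.wall_clock[1:]))
    return times_ok and min(telemetry.losses) <= target_loss
```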
📝 Abstract
Current benchmarks test LLMs on static, already-solved problems (e.g., math word problems), and this approach has effectively demonstrated basic capability acquisition. The natural progression has been toward larger, more comprehensive, and more challenging collections of static problems, an approach that inadvertently constrains the kinds of advances we can measure and incentivize. To address this limitation, we argue for progress-oriented benchmarks: problem environments whose objectives are themselves the core targets of scientific progress, so that achieving state of the art on the benchmark advances the field. As an introductory step, we instantiate an environment based on the NanoGPT speedrun. The environment standardizes a dataset slice, a reference model and training harness, and rich telemetry, with run-time verification and anti-gaming checks. Evaluation centers on the scientific delta achieved: the best-attained loss and the efficiency frontier. Using this environment, we achieve a new state-of-the-art training time, improving on the previous record by 3 seconds, and qualitatively observe the emergence of novel algorithmic ideas. Comparisons between models and agents remain possible, but they are a means, not the end; the benchmark's purpose is to catalyze reusable improvements to the language modeling stack. With this release, our overarching goal is to seed a community shift from static problem leaderboards to test-time research on open-ended yet measurable scientific problems. In this new paradigm, progress on the benchmark is progress on the science, reframing "benchmarking" as a vehicle for scientific advancement.
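The abstract evaluates runs by their best-attained loss and the efficiency frontier; as a minimal sketch of what computing such a frontier could look like, assuming runs are recorded as (training time, final loss) pairs (the function name and record format are assumptions, not part of the release):

```python
# Illustrative sketch: the efficiency frontier as a Pareto front over
# (training time in seconds, best-attained loss) pairs.

def efficiency_frontier(runs: list) -> list:
    """Keep the runs that are not dominated: no other run is both faster
    and reaches a lower loss."""
    frontier = []
    best_loss = float("inf")
    for seconds, loss in sorted(runs):   # ascending training time
        if loss < best_loss:             # strictly improves on all faster runs
            frontier.append((seconds, loss))
            best_loss = loss
    return frontier


# Example: three submitted runs; the slowest one is dominated and drops out.
runs = [(180.0, 3.28), (175.0, 3.31), (200.0, 3.30)]
print(efficiency_frontier(runs))   # [(175.0, 3.31), (180.0, 3.28)]
```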