🤖 AI Summary
This work addresses the challenge of enabling large language models (LLMs) to autonomously improve their mathematical reasoning in a fully unsupervised, zero-label setting. Method: We propose a self-iterative learning paradigm grounded in recursive problem decomposition and verifiability-driven reward signals. It comprises (1) self-supervised reinforcement learning via the autonomous generation and solving of progressively simplified problem variants, and (2) test-time reinforcement learning (TTRL), which dynamically decomposes queries into subproblems and optimizes reasoning policies on the fly during inference, without external supervision or annotated data. Rewards are derived solely from intrinsic problem verifiability. Contribution/Results: Our approach achieves dramatic gains: Llama-3B's integration accuracy rises from 1% to 82%, and Llama-7B attains 70% on the MIT Integration Bee, improving to 85% with TTRL and surpassing o1. This establishes a novel, difficulty-adaptive, verification-guided autonomous learning paradigm.
📝 Abstract
We introduce LADDER (Learning through Autonomous Difficulty-Driven Example Recursion), a framework enabling LLMs to autonomously improve their problem-solving capabilities through self-guided learning. By recursively generating and solving progressively simpler variants of complex problems, LADDER uses reinforcement learning to teach models to solve increasingly difficult problems. This self-improvement process is guided by verifiable reward signals, allowing the model to assess its own solutions. Unlike prior approaches requiring curated datasets or human feedback, LADDER leverages the model's own capabilities to generate easier variants of sample questions. We demonstrate LADDER's effectiveness on mathematical integration tasks, where it improves a Llama 3B model's accuracy from 1% to 82% on undergraduate-level problems and enables a 7B parameter model to achieve state-of-the-art performance for its size (70%) on the MIT Integration Bee examination. We also introduce TTRL (Test-Time Reinforcement Learning), a method that generates variants of test problems at inference time and applies reinforcement learning to further improve performance. By creating and solving related problems during testing, TTRL enables the 7B model to achieve a score of 85%, surpassing o1. These results showcase how strategic self-directed learning can achieve significant capability improvements without relying on architectural scaling or human supervision.
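The abstract's key ingredient is a verifiable reward: for integration, a proposed antiderivative can be checked mechanically rather than by a human or a labeled dataset. The sketch below illustrates one way such a check could work, by comparing the numerical derivative of a candidate antiderivative against the integrand at random points. This is a minimal illustration of the idea, not the paper's actual verifier; the function names, tolerances, and sampling scheme are assumptions for demonstration.

```python
# Illustrative sketch (not the paper's implementation) of a verifiable reward
# for integration: a candidate antiderivative F is accepted iff its numerical
# derivative matches the integrand f at randomly sampled points.
import math
import random

def numeric_derivative(f, x, h=1e-6):
    # Central-difference approximation of f'(x).
    return (f(x + h) - f(x - h)) / (2 * h)

def verify_antiderivative(integrand, candidate, samples=20, tol=1e-4, lo=-1.0, hi=1.0):
    """Binary reward: 1 if candidate' agrees with integrand at all sampled points."""
    rng = random.Random(0)  # fixed seed for reproducibility in this sketch
    for _ in range(samples):
        x = rng.uniform(lo, hi)
        if abs(numeric_derivative(candidate, x) - integrand(x)) > tol:
            return 0
    return 1

# Example: integrand f(x) = x*cos(x); a correct antiderivative is x*sin(x) + cos(x).
reward_good = verify_antiderivative(lambda x: x * math.cos(x),
                                    lambda x: x * math.sin(x) + math.cos(x))
reward_bad = verify_antiderivative(lambda x: x * math.cos(x),
                                   lambda x: x * math.sin(x))  # missing cos(x) term
```

Because the reward is computed from the problem itself, the same check can score solutions to self-generated easier variants during training and to on-the-fly variants in TTRL, with no labels involved.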