🤖 AI Summary
Question: Can base large language models (LLMs) achieve reasoning capabilities comparable to those of reinforcement learning (RL)-fine-tuned models, without any additional training, purely through inference-time sampling?
Method: We propose an MCMC-inspired iterative sampling algorithm that constructs a Markov chain from the base model's own token-level likelihoods, concentrating sampling effort on high-likelihood reasoning paths at inference time (see the sketch after this summary).
Contribution/Results: This work provides the first empirical evidence that strong latent reasoning capacity exists intrinsically in base LLMs and can be unlocked purely via sampling. Unlike RL-based methods, the approach avoids diversity collapse and requires no labeled data, reward models, verifiers, or parameter updates. On single-sample reasoning benchmarks, including MATH500, HumanEval, and GPQA, it matches or exceeds the performance of RLHF- and GRPO-fine-tuned models while preserving rich sample diversity across multiple generations. The method is fully general, training-free, and deployment-friendly.
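The mechanics of such a sampler are easiest to see in code. Below is a minimal, self-contained Python sketch of one plausible instantiation, not the paper's actual implementation: Metropolis-Hastings targeting a sharpened distribution proportional to p(x)**ALPHA, with a proposal that resamples a random suffix from the base model. The toy bigram model, the exponent `ALPHA`, and all function names are illustrative stand-ins, and the acceptance ratio glosses over variable-length bookkeeping.

```python
import math
import random

random.seed(0)

VOCAB = list(range(8))   # toy vocabulary
EOS = 0                  # end-of-sequence token
ALPHA = 4.0              # sharpening exponent: target ~ p(x)**ALPHA (assumed)
MAX_LEN = 16             # maximum sequence length
N_STEPS = 300            # number of MCMC iterations

# Toy stand-in for a base LM: a fixed bigram model over VOCAB. In a real
# implementation, next_token_probs would be the base model's softmax output.
def next_token_probs(prev: int) -> list[float]:
    weights = [((prev + 1) * (t + 3)) % 11 + 1 for t in VOCAB]
    total = sum(weights)
    return [w / total for w in weights]

def sample_suffix(prev: int, room: int) -> tuple[list[int], list[float]]:
    """Autoregressively sample up to `room` tokens from the base model,
    returning the tokens and their per-token log-probabilities."""
    toks, logps = [], []
    for _ in range(room):
        probs = next_token_probs(prev)
        tok = random.choices(VOCAB, weights=probs)[0]
        toks.append(tok)
        logps.append(math.log(probs[tok]))
        if tok == EOS:
            break
        prev = tok
    return toks, logps

# Initialize the chain with an ordinary sample from the base model.
tokens, logps = sample_suffix(EOS, MAX_LEN)

for _ in range(N_STEPS):
    cut = random.randrange(len(tokens))      # keep tokens[:cut], resample the rest
    prev = tokens[cut - 1] if cut > 0 else EOS
    new_tail, new_tail_logps = sample_suffix(prev, MAX_LEN - cut)
    # Because the proposal draws the new suffix from the base model itself,
    # the Metropolis-Hastings ratio for the target p(x)**ALPHA reduces
    # (up to variable-length bookkeeping, ignored here) to
    # exp((ALPHA - 1) * (new suffix log-prob - old suffix log-prob)).
    accept_logp = (ALPHA - 1.0) * (sum(new_tail_logps) - sum(logps[cut:]))
    if random.random() < math.exp(min(0.0, accept_logp)):
        tokens = tokens[:cut] + new_tail
        logps = logps[:cut] + new_tail_logps

print("final sample:", tokens, "log p(x) =", round(sum(logps), 3))
```

Larger values of `ALPHA` concentrate the chain on high-likelihood sequences; setting `ALPHA = 1` recovers ordinary sampling from the base model, since every proposal is then accepted.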
📝 Abstract
Frontier reasoning models have exhibited incredible capabilities across a wide array of disciplines, driven by post-training large language models (LLMs) with reinforcement learning (RL). However, despite the widespread success of this paradigm, much of the literature has been devoted to disentangling truly novel behaviors that emerge during RL but are not present in the base models. In our work, we approach this question from a different angle, instead asking whether comparable reasoning capabilities can be elicited from base models at inference time by pure sampling, without any additional training. Inspired by Markov chain Monte Carlo (MCMC) techniques for sampling from sharpened distributions, we propose a simple iterative sampling algorithm that leverages the base models' own likelihoods. Across different base models, we show that our algorithm offers substantial boosts in reasoning that nearly match, and in some cases exceed, those from RL on a wide variety of single-shot tasks, including MATH500, HumanEval, and GPQA. Moreover, our sampler avoids the collapse in diversity over multiple samples that is characteristic of RL post-training. Crucially, our method does not require training, curated datasets, or a verifier, suggesting broad applicability beyond easily verifiable domains.
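To make "sampling from a sharpened distribution" concrete, one standard formalization (our illustration; the paper's exact target may differ) is to sample from a power of the base model's sequence distribution via Metropolis-Hastings:

```latex
\[
  \pi_\alpha(x) \;\propto\; p_{\mathrm{base}}(x)^{\alpha}, \qquad \alpha > 1,
\]
\[
  A(x \to x') \;=\; \min\!\left(1,\;
    \frac{\pi_\alpha(x')\, q(x \mid x')}{\pi_\alpha(x)\, q(x' \mid x)}\right).
\]
```

When the proposal $q$ resamples continuations from the base model itself, the acceptance ratio simplifies so that higher-likelihood completions are preferentially retained, which is the sharpening effect the abstract describes.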