AI Summary
This work addresses a scalability failure in existing AlphaZero-based tree search methods for enhancing large language model reasoning, where increased search budgets paradoxically degrade accuracy. To resolve this, the authors propose ReSCALE, a novel approach that integrates Gumbel sampling and Sequential Halving into Monte Carlo Tree Search (MCTS), achieving monotonically improving reasoning performance with larger budgets, without any model modification or additional training. Evaluated on the GSM8K and Game24 benchmarks, ReSCALE attains accuracies of 58.4% and 85.3%, respectively, substantially outperforming baseline methods. Ablation studies further show that Sequential Halving is the pivotal component enabling effective scaling and improved accuracy in high-budget regimes.
Abstract
Neural tree search is a powerful decision-making algorithm widely used in complex domains such as game playing and model-based reinforcement learning. Recent work has applied AlphaZero-style tree search to enhance the reasoning capabilities of Large Language Models (LLMs) during inference, but we find that this approach suffers from a scaling failure: on GSM8K and Game24, accuracy drops as the search budget increases. In this paper, we present ReSCALE, an adaptation of Gumbel AlphaZero MCTS that replaces Dirichlet noise and PUCT selection with Gumbel sampling and Sequential Halving, restoring monotonic scaling without changes to the model or its training. ReSCALE reaches 58.4% on GSM8K and 85.3% on Game24 at budgets where the baseline degrades. Ablations confirm that Sequential Halving is the primary driver of the improvement.
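The combination the abstract describes, Gumbel-perturbed candidate selection followed by Sequential Halving of a fixed simulation budget, can be sketched as below. This is an illustrative sketch in the spirit of Gumbel AlphaZero root-action selection, not the paper's implementation; the `simulate` callback, the value scale `50.0`, and all parameter names are assumptions for illustration.

```python
import math
import random

def sequential_halving_gumbel(logits, simulate, budget, m=8, seed=0):
    """Pick a root action with Gumbel top-m sampling + Sequential Halving.

    logits:   prior logits, one per action (e.g. from an LLM policy).
    simulate: callback returning one value estimate for an action.
    budget:   total number of simulations to spend.
    m:        number of initial candidates; halved each phase.
    """
    rng = random.Random(seed)
    n = len(logits)
    # Gumbel(0,1) noise: taking the top-m of (g + logit) samples m
    # distinct actions from the prior without replacement.
    g = [-math.log(-math.log(max(rng.random(), 1e-12))) for _ in range(n)]
    candidates = sorted(range(n), key=lambda a: g[a] + logits[a],
                        reverse=True)[:min(m, n)]

    q_sum = {a: 0.0 for a in candidates}
    visits = {a: 0 for a in candidates}
    phases = max(1, math.ceil(math.log2(len(candidates))))
    used = 0
    while len(candidates) > 1 and used < budget:
        # Spend an equal slice of the budget on each surviving candidate.
        per_action = max(1, budget // (phases * len(candidates)))
        for a in candidates:
            for _ in range(per_action):
                if used >= budget:
                    break
                q_sum[a] += simulate(a)
                visits[a] += 1
                used += 1
        # Keep the top half by g + logit + scaled value estimate
        # (50.0 stands in for the sigma value-scaling; an assumption).
        def score(a):
            q = q_sum[a] / visits[a] if visits[a] else 0.0
            return g[a] + logits[a] + 50.0 * q
        candidates = sorted(candidates, key=score,
                            reverse=True)[:max(1, len(candidates) // 2)]
    return candidates[0]
```

Because every phase spends its slice of the budget evenly over the survivors, a larger budget only refines the value estimates used for elimination, which is consistent with the monotonic scaling the abstract claims for this selection rule.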