🤖 AI Summary
To address computational inefficiency in large language model (LLM) inference caused by redundant reasoning trajectories during test-time scaling, this paper introduces the first standardized benchmark for speculative decoding in this setting. The benchmark systematically evaluates three classes of speculative decoding methods—model-based, training-based, and n-gram-based—under canonical test-time scaling paradigms, including Best-of-N sampling and multi-round thinking. Its core contribution is the empirical finding that simple n-gram-based methods are uniquely effective at capturing the structured, repetitive reasoning patterns that test-time scaling produces. This motivates combining n-gram-based methods with model-based or training-based approaches to balance acceleration across both repetitive and diverse reasoning paths. This work provides a reproducible evaluation framework and practical guidance for efficient test-time scaling.
📝 Abstract
Test-time scaling has emerged as a powerful paradigm for enhancing the reasoning capabilities of large language models (LLMs) by allocating additional computational resources during inference. However, this paradigm is inherently inefficient due to the generation of redundant and repetitive reasoning traces, leading to significant computational overhead. Speculative decoding offers a promising avenue for mitigating this inefficiency, yet its efficacy in the structured, repetition-rich context of test-time scaling remains largely unexplored. To bridge this gap, we introduce the first comprehensive benchmark designed to evaluate speculative decoding methods for accelerating LLM test-time scaling. Our benchmark provides consistent experimental protocols across representative test-time scaling paradigms (e.g., Best-of-N sampling and multi-round thinking), enabling a fair comparison of three major categories of speculative decoding: model-based, training-based, and n-gram-based methods. Extensive experiments reveal that simple n-gram-based methods effectively capture repetitive patterns, demonstrating unique potential in accelerating test-time scaling. This phenomenon demonstrates the value of integrating n-gram-based methods with model-based or training-based approaches to balance acceleration for both repetitive and diverse reasoning in test-time scaling. We hope this benchmark spurs further research on speculative decoding for test-time scaling, enabling faster and more practical reasoning in LLMs through better handling of repetitive and diverse reasoning paths.
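To make the n-gram-based idea concrete, here is a minimal sketch of n-gram lookup drafting (in the spirit of prompt-lookup decoding, one representative of the n-gram-based family the benchmark evaluates). The function name and parameters are illustrative, not the paper's API: the drafter matches the last few generated tokens against an earlier occurrence in the context and proposes the tokens that followed it, which is exactly why it thrives on the repetitive traces that Best-of-N sampling and multi-round thinking generate.

```python
def propose_draft(tokens, ngram_size=2, max_draft=5):
    """Propose draft tokens by matching the trailing n-gram earlier in the context.

    If the last `ngram_size` tokens occurred before, return up to `max_draft`
    tokens that followed that earlier occurrence; the target model then
    verifies the whole draft in a single forward pass (standard speculative
    decoding acceptance). Returns [] when no match is found.
    """
    if len(tokens) < ngram_size:
        return []
    tail = tokens[-ngram_size:]
    # Scan backwards over earlier positions (excluding the tail itself)
    # to find the most recent prior occurrence of the trailing n-gram.
    for start in range(len(tokens) - ngram_size - 1, -1, -1):
        if tokens[start:start + ngram_size] == tail:
            continuation = tokens[start + ngram_size:start + ngram_size + max_draft]
            if continuation:
                return continuation
    return []
```

On a repetitive trace such as `the cat sat on the mat . the cat`, the trailing bigram `the cat` matches the start of the sequence, so the drafter proposes `sat on the mat .` for cheap verification; on diverse, non-repetitive text it simply returns no draft, which is the gap that model-based and training-based drafters fill in a combined strategy.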