🤖 AI Summary
This work addresses the high computational cost of large reasoning models, whose long reasoning trajectories are expensive to generate and whose inference cannot adapt to difficulty variations within a single generation pass. The authors propose a training-free, segment-level runtime model-switching framework that identifies uncertain reasoning segments through offline analysis of token probability margins. During inference, it keeps high-difficulty segments on the large model and delegates low-difficulty segments to a smaller model, integrating speculative decoding to further accelerate execution. This coarse-grained, segment-level control captures intra-trajectory difficulty shifts without requiring supervision signals or additional routing modules. Experiments on multiple reasoning benchmarks demonstrate significant latency reduction, with up to 2.2× end-to-end speedup and less than 2% accuracy degradation.
📝 Abstract
Large reasoning models (LRMs) achieve strong performance on complex reasoning tasks by generating long, multi-step reasoning trajectories, but inference-time scaling incurs substantial deployment cost. A key challenge is that generation difficulty varies within a single output, whereas existing efficiency-oriented approaches either ignore this intra-generation variation or rely on supervised token-level routing with high system complexity. We present \textbf{RelayGen}, a training-free, segment-level runtime model switching framework that exploits difficulty variation in long-form reasoning. Through offline analysis of generation uncertainty using token probability margins, we show that coarse-grained segment-level control is sufficient to capture difficulty transitions within a reasoning trajectory. RelayGen identifies model-specific switch cues that signal transitions to lower-difficulty segments and dynamically delegates their continuation to a smaller model, while preserving high-difficulty reasoning on the large model. Across multiple reasoning benchmarks, RelayGen substantially reduces inference latency while preserving most of the accuracy of large models. When combined with speculative decoding, RelayGen achieves up to 2.2$\times$ end-to-end speedup with less than 2\% accuracy degradation, without requiring additional training or learned routing components.
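To make the uncertainty signal concrete, here is a minimal sketch of segment-level difficulty scoring via token probability margins (top-1 minus top-2 probability). The function names, the mean-margin aggregation, and the threshold value are illustrative assumptions, not the paper's exact procedure or switch-cue mechanism.

```python
import numpy as np

def token_margins(probs):
    """Per-token probability margin: top-1 minus top-2 probability.
    probs: (T, V) array of per-step token distributions.
    A small margin means the model is uncertain at that step."""
    sorted_p = np.sort(probs, axis=-1)
    return sorted_p[:, -1] - sorted_p[:, -2]

def classify_segments(probs, boundaries, threshold=0.5):
    """Label each segment 'hard' (low mean margin -> keep the large model)
    or 'easy' (high mean margin -> hand off to the smaller model).
    boundaries: list of (start, end) token index pairs per segment.
    The 0.5 threshold is an illustrative choice, not from the paper."""
    margins = token_margins(probs)
    labels = []
    for start, end in boundaries:
        mean_margin = margins[start:end].mean()
        labels.append("easy" if mean_margin >= threshold else "hard")
    return labels

# Toy example: one confident segment, one uncertain segment.
confident = np.tile([0.90, 0.05, 0.05], (5, 1))  # margin 0.85 per token
uncertain = np.tile([0.40, 0.35, 0.25], (5, 1))  # margin 0.05 per token
probs = np.vstack([confident, uncertain])
print(classify_segments(probs, [(0, 5), (5, 10)]))  # -> ['easy', 'hard']
```

In a runtime switching setup, such labels would decide, at segment boundaries, which model continues the generation; the offline analysis phase would instead be used to calibrate the threshold and identify model-specific switch cues.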