🤖 AI Summary
Large reasoning models (LRMs) produce redundant, inefficient outputs on simple queries, and can even degrade accuracy relative to standard LLMs.
Method: This paper proposes a training-free, model-agnostic dynamic routing framework that orchestrates LLMs and LRMs. Its core innovation is an uncertainty-aware routing mechanism that jointly leverages conformal prediction (CP) and an adaptive Full and Binary Entropy (FBE) criterion, establishing statistically grounded decision boundaries for model selection. The method combines multiple-choice question-answering (MCQA) prompting with a model-pairing-agnostic architecture, enabling threshold adaptation and cross-task generalization.
Results: Evaluated on MCQA benchmarks spanning mathematics, logic, and Chinese chemistry, the framework significantly reduces token consumption while maintaining or improving accuracy. Further experiments on open-ended QA and heterogeneous model ensembles demonstrate strong robustness and broad applicability across diverse reasoning tasks and model families.
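The routing idea above can be sketched with split conformal prediction on the LLM's MCQA option probabilities. The nonconformity score (one minus the probability of the true option) and the singleton-set routing rule below are standard split-CP practice, used here as illustrative assumptions; CP-Router's exact score and routing condition may differ.

```python
import numpy as np

def cp_threshold(cal_probs, cal_labels, alpha=0.1):
    """Split conformal calibration: the nonconformity score is
    1 - probability assigned to the true option."""
    scores = 1.0 - cal_probs[np.arange(len(cal_labels)), cal_labels]
    n = len(scores)
    # Finite-sample-corrected quantile level, capped at 1.0.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return np.quantile(scores, level, method="higher")

def route(test_probs, qhat):
    """Build the CP prediction set; a singleton set means the cheap
    LLM is confident enough, otherwise defer to the LRM."""
    pred_set = np.where(1.0 - test_probs <= qhat)[0]
    return ("LLM" if len(pred_set) == 1 else "LRM"), pred_set

# Toy calibration set: the LLM puts 0.4 on the true option, 0.2 elsewhere.
cal_probs = np.full((9, 4), 0.2)
cal_probs[:, 0] = 0.4
cal_labels = np.zeros(9, dtype=int)
qhat = cp_threshold(cal_probs, cal_labels, alpha=0.1)

confident, _ = route(np.array([0.85, 0.05, 0.05, 0.05]), qhat)  # singleton set
ambiguous, _ = route(np.array([0.45, 0.45, 0.05, 0.05]), qhat)  # two options survive
```

The CP coverage guarantee ensures the true answer lands in the prediction set with probability at least 1 - alpha, so a singleton set is a statistically grounded signal that the LLM alone suffices.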
📝 Abstract
Recent advances in Large Reasoning Models (LRMs) have significantly improved long-chain reasoning capabilities over Large Language Models (LLMs). However, LRMs often produce unnecessarily lengthy outputs even for simple queries, leading to inefficiencies or even accuracy degradation compared to LLMs. To overcome this, we propose CP-Router, a training-free and model-agnostic routing framework that dynamically selects between an LLM and an LRM, demonstrated with multiple-choice question answering (MCQA) prompts. The routing decision is guided by the prediction uncertainty estimates derived via Conformal Prediction (CP), which provides rigorous coverage guarantees. To further refine the uncertainty differentiation across inputs, we introduce Full and Binary Entropy (FBE), a novel entropy-based criterion that adaptively selects the appropriate CP threshold. Experiments across diverse MCQA benchmarks, including mathematics, logical reasoning, and Chinese chemistry, demonstrate that CP-Router efficiently reduces token usage while maintaining or even improving accuracy compared to using LRM alone. We also extend CP-Router to diverse model pairings and open-ended QA, where it continues to demonstrate strong performance, validating its generality and robustness.
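The FBE criterion contrasts two standard entropy readings of the LLM's option distribution, sketched below. This snippet only computes the two signals; how CP-Router maps them to a concrete CP threshold is defined in the paper and is not reproduced here.

```python
import numpy as np

def full_entropy(p):
    """Shannon entropy over all answer options (nats)."""
    p = np.clip(np.asarray(p, dtype=float), 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())

def binary_entropy(p):
    """Entropy after collapsing the distribution to two outcomes:
    the top option versus everything else."""
    top = float(np.max(p))
    q = np.clip(np.array([top, 1.0 - top]), 1e-12, 1.0)
    return float(-(q * np.log(q)).sum())

probs = np.array([0.7, 0.1, 0.1, 0.1])
h_full = full_entropy(probs)   # uncertainty spread over all options
h_bin = binary_entropy(probs)  # uncertainty about top-vs-rest only
# Merging the non-top options can never increase entropy, so h_bin <= h_full.
```

Intuitively, full entropy captures how dispersed the LLM is across all options, while binary entropy isolates its confidence in the single top choice; comparing the two gives a finer uncertainty signal for adapting the CP threshold.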