🤖 AI Summary
This work addresses the trade-off between performance and cost in large language model (LLM) inference. We propose a lightweight, interpretable, and scalable dynamic routing framework. Its core innovation lies in adapting Item Response Theory (IRT), a psychometric modeling paradigm, to jointly characterize query difficulty and model capability across varying inference budgets, enabling fine-grained, interpretable capability–difficulty alignment. The framework generalizes to unseen queries and supports plug-and-play integration of new models without retraining. Evaluated on eight mainstream reasoning benchmarks, our method consistently outperforms existing routing strategies across three critical dimensions: inference accuracy, out-of-distribution generalization, and system scalability. Notably, it achieves these gains while maintaining low computational overhead and preserving interpretability through principled psychometric grounding.
📝 Abstract
Reasoning language models have demonstrated remarkable performance on many challenging tasks in math, science, and coding. Choosing the right reasoning model for practical deployment involves a performance–cost trade-off at two key levels: model size and reasoning budget, where larger models and higher reasoning budgets lead to better performance but with increased cost and latency. In this work, we tackle this trade-off from the angle of routing different queries to different model configurations, and present RADAR (Reasoning-Ability and Difficulty-Aware Routing), a lightweight, interpretable, and scalable routing framework. Inspired by psychometrics, RADAR learns an item response model from model responses to queries under different reasoning budgets, with interpretable parameters including query difficulties and model-budget abilities. RADAR then routes queries with higher difficulty to model-budget pairs with higher ability, and vice versa. We conduct extensive experiments on 8 widely used challenging reasoning benchmarks, demonstrating the superior performance of RADAR compared to state-of-the-art model routing methods. RADAR also exhibits query generalization capabilities, showing strong performance on out-of-distribution queries across all benchmarks. RADAR is also scalable and can efficiently integrate additional models by dynamically selecting a small set of evaluation queries to estimate their abilities.
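To make the routing idea concrete, here is a minimal sketch of difficulty-aware routing under a one-parameter (Rasch) item response model, where the probability of a correct response is a logistic function of ability minus difficulty. All names, ability/cost values, and the success threshold below are illustrative assumptions, not RADAR's actual parameterization or implementation.

```python
import math

def p_correct(ability: float, difficulty: float) -> float:
    """Rasch (1PL) item response model: P(correct) = sigmoid(ability - difficulty)."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def route(difficulty: float, model_budgets, threshold: float = 0.8) -> str:
    """Return the cheapest (model, budget) pair whose estimated ability gives a
    predicted success probability >= threshold; fall back to the strongest pair.

    model_budgets: list of (name, ability, cost) tuples -- hypothetical values.
    """
    for name, ability, cost in sorted(model_budgets, key=lambda mb: mb[2]):
        if p_correct(ability, difficulty) >= threshold:
            return name
    # No pair clears the threshold: route to the highest-ability pair.
    return max(model_budgets, key=lambda mb: mb[1])[0]

# Hypothetical model-budget pairs: (name, estimated ability, relative cost).
pairs = [("small/low", -0.5, 1.0), ("small/high", 0.5, 2.0),
         ("large/low", 1.0, 4.0), ("large/high", 2.0, 8.0)]

print(route(-2.0, pairs))  # easy query -> cheap pair ("small/low")
print(route(1.5, pairs))   # hard query -> strongest pair ("large/high")
```

The same per-query scoring also suggests how new models could be integrated cheaply: estimating a single ability scalar per model-budget pair requires only responses on a small set of calibration queries, rather than retraining the router.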