MetaLLM: A High-performant and Cost-efficient Dynamic Framework for Wrapping LLMs

📅 2024-07-15
🏛️ arXiv.org
📈 Citations: 6
Influential: 0
🤖 AI Summary
In multi-LLM deployment scenarios, query heterogeneity makes it challenging to jointly optimize accuracy and cost. Method: This paper proposes a dynamic intelligent routing framework that formalizes LLM selection as a budget-constrained multi-armed bandit (MAB) problem, enabling joint online optimization of accuracy and economic efficiency under uncertainty. The framework supports plug-and-play integration of heterogeneous models across vendors and scales, combining online learning, API orchestration, and dynamic policy scheduling for query-level real-time model selection. Contribution/Results: Experiments across GPT, Claude, Titan, and LLaMA demonstrate an average 12.3% accuracy gain and 41.7% API cost reduction over single-model baselines. To our knowledge, this is the first work to formalize LLM routing as a budget-constrained online decision problem, establishing a scalable, verifiable paradigm for cooperative optimization in multi-model serving systems.
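To make the budget-constrained bandit formulation concrete, here is a minimal, generic sketch of an epsilon-greedy router over LLM "arms" whose reward trades off answer correctness against per-call API cost. This is an illustration of the general technique, not MetaLLM's actual algorithm; the class name, the cost-penalty weight `lam`, and the toy costs are all assumptions for the example.

```python
import random

class BudgetedBanditRouter:
    """Illustrative epsilon-greedy bandit over LLM "arms".

    Each arm is an LLM with a per-query cost. The reward for a pull is
    correctness minus a cost penalty, and arms that would exceed the
    remaining budget become unavailable. A sketch only, not MetaLLM's
    exact method (which the paper formalizes more carefully).
    """

    def __init__(self, costs, budget, lam=0.5, epsilon=0.1, seed=0):
        self.costs = costs                # per-query API cost of each LLM
        self.budget = budget              # total spend allowed
        self.lam = lam                    # cost-penalty weight (assumed)
        self.epsilon = epsilon            # exploration rate
        self.counts = [0] * len(costs)    # pulls per arm
        self.values = [0.0] * len(costs)  # running mean reward per arm
        self.rng = random.Random(seed)

    def select(self):
        """Pick an LLM index among those still affordable, or None."""
        affordable = [i for i, c in enumerate(self.costs) if c <= self.budget]
        if not affordable:
            return None  # budget exhausted
        if self.rng.random() < self.epsilon:
            return self.rng.choice(affordable)  # explore
        return max(affordable, key=lambda i: self.values[i])  # exploit

    def update(self, arm, correct):
        """Observe whether the chosen LLM answered correctly."""
        self.budget -= self.costs[arm]
        reward = float(correct) - self.lam * self.costs[arm]
        self.counts[arm] += 1
        # incremental running-mean update
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]


# Toy usage: arm 0 is expensive and usually wrong on these queries,
# arm 1 is cheap and usually right; the router should learn to prefer arm 1.
router = BudgetedBanditRouter(costs=[0.03, 0.001], budget=1.0, seed=42)
for _ in range(200):
    arm = router.select()
    if arm is None:
        break
    correct = (arm == 1)  # stand-in for an accuracy signal on this query
    router.update(arm, correct)
# after training, the cheap accurate arm accrues the higher mean reward
```

Per-query routing in the paper additionally conditions the choice on the query itself (its domain or difficulty), which a contextual bandit would capture; the sketch above keeps only the budget and exploration/exploitation mechanics.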

📝 Abstract
The rapid progress in machine learning (ML) has brought forth many large language models (LLMs) that excel in various tasks and areas. These LLMs come with different abilities and costs in terms of computation or pricing. Since the demand for each query can vary, e.g., because of the queried domain or its complexity, defaulting to one LLM in an application is not usually the best choice, whether it is the biggest, priciest, or even the one with the best average test performance. Consequently, picking the right LLM that is both accurate and cost-effective for an application remains a challenge. In this paper, we introduce MetaLLM, a framework that dynamically and intelligently routes each query to the optimal LLM (among several available LLMs) for classification tasks, achieving significantly improved accuracy and cost-effectiveness. By framing the selection problem as a multi-armed bandit, MetaLLM balances prediction accuracy and cost efficiency under uncertainty. Our experiments, conducted on popular LLM platforms such as OpenAI's GPT models, Amazon's Titan, Anthropic's Claude, and Meta's LLaMa, showcase MetaLLM's efficacy in real-world scenarios, laying the groundwork for future extensions beyond classification tasks.
Problem

Research questions and friction points this paper is trying to address.

Dynamically selecting optimal LLM for varying query demands
Balancing accuracy and cost efficiency in LLM selection
Improving performance in classification and multi-choice QA tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic routing of queries to optimal LLMs
Multi-armed bandit for balancing accuracy and cost
Improved accuracy and cost-effectiveness in real-world scenarios
👥 Authors
Quang H. Nguyen
College of Engineering and Computer Science, VinUniversity, Vietnam
Duy C. Hoang
College of Engineering and Computer Science, VinUniversity, Vietnam
Juliette Decugis
College of Engineering and Computer Science, VinUniversity, Vietnam
Saurav Manchanda
Amazon, USA
N. Chawla
University of Notre Dame
Khoa D. Doan
College of Engineering and Computer Science, VinUniversity, Vietnam