🤖 AI Summary
Existing LLM evaluation relies on Pareto frontiers, which makes it difficult to compare accuracy-cost trade-offs across models (e.g., a low-latency, high-error model versus a high-accuracy, high-overhead one). This paper proposes the first economics-driven evaluation framework that unifies accuracy, latency, and error rate into a single dollar-denominated cost metric, enabling realistic, deployment-oriented model comparison. The framework models error cost explicitly, and the authors conduct a systematic economic analysis of reasoning and non-reasoning models, as well as cascaded architectures, on the MATH benchmark. Results show that reasoning models become economically preferable once the cost per error exceeds $0.01, and that a single strong model typically outperforms cascaded strategies. By moving beyond conventional multi-objective optimization, the framework reveals error cost as a decisive factor in model selection and provides actionable, quantitative guidance for LLM deployment decisions.
📝 Abstract
Practitioners often navigate LLM performance trade-offs by plotting Pareto frontiers of optimal accuracy-cost trade-offs. However, this approach offers no way to compare LLMs with distinct strengths and weaknesses: for example, a cheap, error-prone model versus a pricey but accurate one. To address this gap, we propose economic evaluation of LLMs. Our framework quantifies the performance trade-off of an LLM as a single number based on the economic constraints of a concrete use case, all expressed in dollars: the cost of making a mistake, the cost of incremental latency, and the cost of abstaining from a query. We apply our economic evaluation framework to compare the performance of reasoning and non-reasoning models on difficult questions from the MATH benchmark, discovering that reasoning models offer better accuracy-cost trade-offs as soon as the economic cost of a mistake exceeds $0.01. In addition, we find that single large LLMs often outperform cascades when the cost of making a mistake is as low as $0.10. Overall, our findings suggest that when automating meaningful human tasks with AI models, practitioners should typically use the most powerful available model, rather than attempt to minimize AI deployment costs, since deployment costs are likely dwarfed by the economic impact of AI errors.
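The idea of collapsing an LLM's trade-offs into one dollar figure can be sketched as a simple expected-cost calculation. The sketch below is illustrative only and assumes a linear combination of the three economic terms the abstract names (mistakes, latency, abstentions); the parameter names and the example numbers are hypothetical, not the paper's notation or data.

```python
def expected_cost_per_query(
    inference_cost: float,        # $ spent on tokens/compute per query
    accuracy: float,              # fraction of answered queries that are correct
    abstain_rate: float,          # fraction of queries the model declines
    latency_sec: float,           # incremental latency per query, in seconds
    error_cost: float,            # $ lost per wrong answer
    latency_cost_per_sec: float,  # $ lost per second of incremental latency
    abstain_cost: float,          # $ lost per abstained query
) -> float:
    """Dollar-denominated expected cost of one query under a use case's
    economic constraints (a hypothetical linear model, for illustration)."""
    answered = 1.0 - abstain_rate
    error_rate = answered * (1.0 - accuracy)
    return (
        inference_cost
        + error_rate * error_cost
        + latency_sec * latency_cost_per_sec
        + abstain_rate * abstain_cost
    )

# Hypothetical comparison: a cheap, error-prone model vs. a pricey, accurate one.
# When a mistake costs $0.01, the cheap model wins; at $1.00 per mistake,
# the accurate model's higher inference cost is dwarfed by avoided errors.
for error_cost in (0.01, 1.00):
    cheap = expected_cost_per_query(0.001, 0.80, 0.0, 1.0, error_cost, 0.0001, 0.005)
    strong = expected_cost_per_query(0.020, 0.98, 0.0, 10.0, error_cost, 0.0001, 0.005)
    print(f"mistake=${error_cost:.2f}: cheap=${cheap:.4f}, strong=${strong:.4f}")
```

Under this toy model, the crossover point where the stronger model becomes cheaper overall depends almost entirely on the per-mistake cost, mirroring the paper's headline finding.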