🤖 AI Summary
To address the high computational cost of serving large models at inference time, this paper proposes Agreement-Based Cascading (ABC), a multi-stage cascaded inference framework that performs data-adaptive routing via ensemble model agreement. Its core idea is to use agreement among an ensemble of models at each level of the cascade as the deferral criterion, enabling difficulty-aware routing: easy examples are resolved quickly by small, cheap models, while hard examples are progressively escalated to larger ones. ABC requires no modification to the base models and can serve as a drop-in replacement for a monolithic model. Experiments demonstrate that in edge-to-cloud settings ABC reduces communication costs by up to 14×; in cloud-based model serving it cuts rental costs by 3×; and for inference via LLM API services it reduces the average price per token/request by 2–25× relative to state-of-the-art LLM cascades. Notably, ABC can surpass the best single model it aims to replace in both accuracy and inference efficiency.
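To make the routing rule concrete, below is a minimal sketch of agreement-based cascading. The function name, tier structure, voting scheme, and unanimity threshold are illustrative assumptions for exposition, not the paper's reference implementation.

```python
# A minimal sketch of agreement-based routing (illustrative only).
from collections import Counter

def abc_predict(x, cascade, agreement_threshold=1.0):
    """Route input x through a cascade of model ensembles, cheapest tier first.

    `cascade` is a list of tiers, each a list of models of similar size,
    ordered from smallest/cheapest to largest/most expensive. An input is
    answered early when the fraction of models in a tier agreeing on a label
    meets `agreement_threshold`; otherwise it escalates to the next tier.
    """
    for tier in cascade[:-1]:
        preds = [model(x) for model in tier]        # ensemble members can run in parallel
        label, votes = Counter(preds).most_common(1)[0]
        if votes / len(preds) >= agreement_threshold:
            return label                             # "easy" example: stop early
    # "Hard" example: fall through to the final (largest) tier's majority vote.
    preds = [model(x) for model in cascade[-1]]
    return Counter(preds).most_common(1)[0][0]
```

With `agreement_threshold=1.0` a tier only answers when its members are unanimous, so any disagreement is treated as a signal of difficulty and triggers escalation.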
📝 Abstract
Adaptive inference schemes reduce the cost of machine learning inference by assigning smaller models to easier examples, attempting to avoid invocation of larger models when possible. In this work we explore a simple, effective adaptive inference technique we term Agreement-Based Cascading (ABC). ABC builds a cascade of models of increasing size/complexity, and uses agreement between ensembles of models at each level of the cascade as a basis for data-dependent routing. Although ensemble execution introduces additional expense, we show that these costs can be easily offset in practice due to large expected differences in model sizes, parallel inference execution capabilities, and accuracy benefits of ensembling. We examine ABC theoretically and empirically in terms of these parameters, showing that the approach can reliably act as a drop-in replacement for existing models and surpass the best single model it aims to replace in terms of both efficiency and accuracy. Additionally, we explore the performance of ABC relative to existing cascading methods in three common scenarios: (1) edge-to-cloud inference, where ABC reduces communication costs by up to 14x; (2) cloud-based model serving, where it achieves a 3x reduction in rental costs; and (3) inference via model API services, where ABC achieves a 2-25x reduction in average price per token/request relative to state-of-the-art LLM cascades.
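The abstract's claim that ensemble overhead is easily offset by large model-size gaps can be sanity-checked with back-of-the-envelope arithmetic. The numbers below (per-query costs, ensemble size, escalation rate) are illustrative assumptions, not figures from the paper.

```python
# Expected-cost comparison for a two-tier cascade (illustrative numbers only).
# Assume a small model costs 1 unit per query, the large model 50 units,
# the small tier is an ensemble of k=3 models (runnable in parallel, so
# latency ~ one small model while compute is 3 units), and the small tier
# confidently resolves 80% of queries.
k, c_small, c_large, p_easy = 3, 1.0, 50.0, 0.8

monolithic_cost = c_large
cascade_cost = k * c_small + (1 - p_easy) * c_large  # ensemble always runs;
                                                     # large model only on escalation
print(f"monolithic: {monolithic_cost:.1f} units/query")  # 50.0
print(f"cascade:    {cascade_cost:.1f} units/query")     # 3 + 10 = 13.0, ~3.8x cheaper
```

Under these assumptions the 3× ensemble overhead at the small tier is dwarfed by the 50× size gap, which is the intuition behind the cost reductions reported above.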