🤖 AI Summary
This work addresses the challenge of maintaining quality of service (QoS) for small language models (SLMs) under runtime uncertainties such as dynamic workloads and model drift, which often lead to suboptimal trade-offs among latency, energy consumption, and task performance. To this end, the authors propose an adaptive orchestration mechanism based on the MAPE-K architecture that uniquely integrates multi-SLM collaboration with QoS-aware dynamic routing. By continuously monitoring incoming queries and QoS metrics, the system dynamically selects the most suitable model while jointly optimizing resource utilization through integrated caching and memory scheduling strategies. Experimental results demonstrate that, compared to a single large-model baseline, the proposed approach reduces latency by approximately 40% and energy consumption by 50%, while preserving task performance across domains, thereby significantly enhancing overall system efficiency.
📝 Abstract
AI-enabled systems are subject to many forms of runtime uncertainty, ranging from dynamic workloads and shifting resource requirements to model drift. These uncertainties have a significant impact on the overall Quality of Service (QoS). This is particularly true for Language Model (LM) enabled systems, where the autoregressive nature of token generation introduces variability in latency, energy usage, and response quality. Systems powered by LLMs are either resource-intensive (if run on-prem) or raise privacy and cost concerns (if accessed via APIs). While deploying a Small Language Model (SLM) can be resource-efficient, a single SLM often falls short of the diversity and scale of real-world requirements. We therefore argue that, rather than relying on any one SLM, a coordinated fleet of SLMs, each with specialized strengths, can enable systems to adapt dynamically to shifting contexts and workload patterns. Realizing the full potential of such an approach, however, demands intelligent orchestration and continuous adaptation. To this end, we introduce CALM, a self-adaptive orchestration mechanism based on MAPE-K. Our approach continuously monitors user queries, analyzes the QoS metrics of the SLMs, identifies the optimal SLM for each query, and routes the query accordingly; to further improve effectiveness and efficiency, it leverages caching and scheduling to decide which SLMs are kept in memory. Our evaluation shows that CALM reduces latency by approximately 40% and energy consumption by 50% compared to single-LLM baselines, while preserving domain-specific task performance.
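The QoS-aware routing and memory-scheduling loop described above can be sketched in a few lines. This is a minimal illustration, not CALM's actual implementation: the `ModelProfile` fields, the scoring weights, and the LRU residency policy are all hypothetical stand-ins for the paper's monitored QoS metrics and caching/scheduling strategies.

```python
from dataclasses import dataclass

@dataclass
class ModelProfile:
    """Knowledge-base entry: monitored QoS metrics for one SLM (illustrative fields)."""
    domain: str        # specialization, e.g. "code" or "chat"
    latency_ms: float  # mean response latency
    energy_j: float    # mean energy per query
    quality: float     # mean task score in [0, 1]

def qos_score(p: ModelProfile, query_domain: str,
              w_quality: float = 0.5, w_latency: float = 0.3,
              w_energy: float = 0.2) -> float:
    """Analyze step: weighted QoS utility (weights are made up for illustration)."""
    # Penalize out-of-domain models; reward quality, penalize latency and energy.
    quality = p.quality if p.domain == query_domain else 0.5 * p.quality
    return (w_quality * quality
            - w_latency * p.latency_ms / 1000.0
            - w_energy * p.energy_j / 10.0)

class Orchestrator:
    """Minimal MAPE-K-style router over a fleet of SLM profiles."""

    def __init__(self, fleet: dict[str, ModelProfile], memory_slots: int = 2):
        self.fleet = fleet            # Knowledge: per-model QoS profiles
        self.memory_slots = memory_slots
        self.resident: list[str] = [] # models kept loaded, in LRU order

    def route(self, query_domain: str) -> str:
        # Analyze + Plan: pick the model with the best QoS utility for this query.
        best = max(self.fleet, key=lambda m: qos_score(self.fleet[m], query_domain))
        # Plan: simple LRU policy deciding which SLMs stay in memory.
        if best in self.resident:
            self.resident.remove(best)
        self.resident.append(best)
        if len(self.resident) > self.memory_slots:
            self.resident.pop(0)  # evict the least recently used model
        # Execute: here the query would be dispatched to `best`.
        return best
```

In a real deployment the Monitor step would refresh each `ModelProfile` from live telemetry, closing the feedback loop; here the profiles are static for brevity.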