🤖 AI Summary
To address high tail latency and low resource utilization in LMaaS platforms—caused by delayed resource scheduling under dynamic request loads and variable computational demands—this paper proposes PreServe, the first hierarchical, prediction-driven scheduling framework. The framework jointly models coarse-grained service-level workloads and fine-grained per-request resource requirements, integrating time-series forecasting, lightweight request feature encoding, online elastic scaling, and prediction-aware intelligent routing. Evaluated on real-world LMaaS production traces, it reduces tail latency by 45.9%, decreases average resource consumption by 44.5%, and incurs only 0.23% overhead. Its core contribution is the first end-to-end, closed-loop prediction-to-scheduling system that jointly perceives both short-term and long-term load dynamics, thereby significantly improving SLO compliance and resource efficiency.
📝 Abstract
Large Language Models (LLMs) have revolutionized fields such as natural language processing and software engineering, fueling the growth of Language-Model-as-a-Service (LMaaS) platforms hosted by industry leaders like OpenAI. These platforms handle millions of queries daily, requiring efficient management to reduce serving latency and meet Service Level Objectives (SLOs) while optimizing resource utilization. However, conventional cloud service management techniques, originally designed for traditional workloads, are suboptimal for LMaaS due to its dynamic service workloads and variable request loads. To address this, we propose PreServe, a tailored LMaaS management framework centered on hierarchical prediction. PreServe incorporates a service workload predictor to estimate periodic token density at a coarse granularity and a novel request load predictor to assess the resource demand of individual LLM requests, enabling the construction of a load anticipator for each LLM instance. By integrating both long-term and short-term predictions, PreServe adjusts resource allocation in advance, mitigating the risks of instance under- or over-provisioning. Moreover, PreServe optimizes request routing by considering both current and anticipated future instance loads, ensuring balanced load distribution across instances. Evaluations on real-world LMaaS production datasets demonstrate that
PreServe outperforms state-of-the-art approaches, achieving over a 45.9% reduction in tail latency and an average 44.5% decrease in resource consumption, while incurring only 0.23% additional overhead.
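The abstract's routing idea—scoring each instance by both its current load and its anticipated future load—can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the `Instance` fields, the `route` function, and the blending weight `alpha` are all assumptions introduced here for clarity.

```python
# Illustrative sketch of prediction-aware routing (assumed design, not
# PreServe's actual algorithm): pick the instance with the lowest blend
# of current load and predicted future load.
from dataclasses import dataclass


@dataclass
class Instance:
    name: str
    current_load: float     # e.g., tokens currently in flight on this instance
    predicted_load: float   # anticipated near-term load from a request load predictor


def route(instances: list[Instance], alpha: float = 0.5) -> Instance:
    """Return the instance minimizing a weighted sum of current and
    predicted load. `alpha` (a hypothetical parameter) trades off the
    two signals; alpha=0 ignores predictions, alpha=1 uses only them."""
    return min(
        instances,
        key=lambda i: (1 - alpha) * i.current_load + alpha * i.predicted_load,
    )


instances = [
    Instance("gpu-0", current_load=120.0, predicted_load=300.0),
    Instance("gpu-1", current_load=200.0, predicted_load=150.0),
]
best = route(instances)
print(best.name)  # gpu-1: lower blended load despite higher current load
```

Routing on the blended score rather than current load alone is what lets a scheduler avoid sending new requests to an instance that looks idle now but is predicted to become a hotspot.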