🤖 AI Summary
Existing LLM serving systems treat resource demands as a black box, leading to inefficient queuing policies and delayed backend prewarming that severely degrade end-to-end performance. To address this, we propose PDGraph, the first unified probabilistic graphical model for characterizing LLM resource demands across diverse workloads. Building on it, we introduce Gittins-index scheduling—to our knowledge, the first application of this result from optimal stopping theory to LLM serving—to minimize average application completion time. We additionally design a dynamic backend prewarming mechanism driven by real-time load prediction. Evaluated across multiple classes of LLM applications, our approach reduces average completion time by over 70% and P95 completion time by more than 80%, significantly improving service efficiency and response-time predictability.
📝 Abstract
Applications based on Large Language Models (LLMs) contain a series of tasks that address real-world problems with boosted capability, and they place dynamic demand volumes on diverse backends. Existing serving systems treat the resource demands of LLM applications as a black box, compromising end-to-end efficiency through improper queuing order and backend warm-up latency. We find that the resource demands of LLM applications can be modeled in a general and accurate manner with a Probabilistic Demand Graph (PDGraph). We then propose Hermes, which leverages PDGraph for efficient serving of LLM applications. Given probabilistic demand descriptions, Hermes applies the Gittins policy to determine the scheduling order that minimizes the average application completion time. It also uses the PDGraph model to prewarm cold backends at the proper moments. Experiments with diverse LLM applications confirm that Hermes effectively improves application serving efficiency, reducing the average completion time by over 70% and the P95 completion time by over 80%.
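To make the scheduling idea concrete, the following is a minimal sketch of a Gittins-style index for jobs whose total service time is known only as a probability distribution. This is an illustration of the general Gittins policy for size-oblivious scheduling, not the paper's implementation: Hermes works over the full PDGraph model, whereas here a job's demand is simplified to a discrete service-time distribution, and the function name and interface are hypothetical. The policy serves the job with the highest index (completion probability per unit of expected invested service), which reduces to shortest-remaining-time-first when sizes are deterministic.

```python
def gittins_index(service_dist, attained):
    """Gittins index for a job with uncertain total service time.

    service_dist: dict mapping possible total service times -> probability
                  (a stand-in for the richer PDGraph demand model)
    attained:     service the job has already received

    For each candidate quantum ending at a possible size, compute
    P(complete within quantum | still running) divided by the expected
    service invested in that quantum; the index is the supremum.
    """
    best = 0.0
    for delta_end in sorted(s for s in service_dist if s > attained):
        # probability the job finishes within (attained, delta_end]
        p_complete = sum(p for s, p in service_dist.items()
                         if attained < s <= delta_end)
        # expected additional service spent in this quantum,
        # over outcomes consistent with the job still running
        exp_invest = sum(p * (min(s, delta_end) - attained)
                         for s, p in service_dist.items() if s > attained)
        if exp_invest > 0:
            best = max(best, p_complete / exp_invest)
    return best
```

Under this sketch, a job known to need 2 more units gets index 0.5, while one needing 4 gets 0.25, so the shorter job is served first; as a running job accumulates service without completing, its distribution is implicitly conditioned on the larger sizes and its index updates accordingly.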