Efficient Serving of LLM Applications with Probabilistic Demand Modeling

📅 2025-06-17
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing LLM serving systems treat resource demands as a black box, leading to inefficient queuing policies and delayed backend prewarming that severely degrade end-to-end performance. To address this, the paper proposes PDGraph, a unified probabilistic graphical model for characterizing the resource demands of LLM applications across diverse workloads, and Hermes, a serving system built on top of it. Hermes applies Gittins-index scheduling to the probabilistic demand descriptions to minimize average application completion time, and uses PDGraph-based load prediction to prewarm cold backends at the proper moments. Evaluated across multiple LLM application classes, Hermes reduces average completion time by over 70% and P95 completion time by more than 80%, significantly improving serving efficiency and response-time determinism.

📝 Abstract
Applications based on Large Language Models (LLMs) contain a series of tasks that address real-world problems with boosted capability, and these tasks place dynamic demand volumes on diverse backends. Existing serving systems treat the resource demands of LLM applications as a black box, compromising end-to-end efficiency through improper queuing order and backend warm-up latency. We find that the resource demands of LLM applications can be modeled in a general and accurate manner with the Probabilistic Demand Graph (PDGraph). We then propose Hermes, which leverages PDGraph for efficient serving of LLM applications. Confronting the probabilistic demand description, Hermes applies the Gittins policy to determine the scheduling order that minimizes the average application completion time. It also uses the PDGraph model to prewarm cold backends at the proper moments. Experiments with diverse LLM applications confirm that Hermes effectively improves application serving efficiency, reducing the average completion time by over 70% and the P95 completion time by over 80%.
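For intuition, a PDGraph can be thought of as a graph of application tasks whose edges carry transition probabilities and whose nodes carry demand distributions; the expected remaining demand of an application then follows by recursion over successors. The sketch below is a minimal, hypothetical rendering of that idea (the task names, demand samples, and branch probabilities are invented for illustration, not taken from the paper):

```python
# Hypothetical sketch of a Probabilistic Demand Graph (PDGraph):
# each task node holds a demand distribution (here, sampled token
# counts) and probabilistic edges to successor tasks. The structure
# and numbers are illustrative assumptions, not the paper's data model.
pdgraph = {
    "plan":   {"demand": [50, 80, 120], "next": [("search", 0.7), ("answer", 0.3)]},
    "search": {"demand": [200, 300],    "next": [("answer", 1.0)]},
    "answer": {"demand": [100, 150],    "next": []},
}

def expected_demand(node, graph):
    """Expected total remaining demand starting from `node`."""
    info = graph[node]
    # Mean of this node's own demand samples.
    exp = sum(info["demand"]) / len(info["demand"])
    # Plus probability-weighted demand of each possible successor.
    for succ, prob in info["next"]:
        exp += prob * expected_demand(succ, graph)
    return exp

print(expected_demand("plan", pdgraph))  # expected tokens for the whole application
```

A scheduler could use such per-application expectations (and their tails) both to order the queue and to decide which backends an application is likely to touch next.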
Problem

Research questions and friction points this paper is trying to address.

Model dynamic LLM application demands for efficient resource allocation
Optimize task scheduling to reduce application completion time
Improve backend warm-up timing using probabilistic demand modeling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Probabilistic Demand Graph for modeling
Applies Gittins policy for optimal scheduling
Leverages PDGraph for backend prewarming
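For intuition on the scheduling piece: the Gittins policy ranks each job by the ratio of the probability that it finishes within a service quantum to the expected service it consumes in that quantum, conditioned on the service it has already received, and serves the job with the highest index. A minimal sketch over an empirical size distribution might look like this (the jobs, sizes, and candidate quanta are illustrative assumptions, not the paper's implementation):

```python
# Minimal sketch of Gittins-index scheduling over an empirical job-size
# distribution. Jobs and sizes below are invented for illustration.
def gittins_index(sizes, attained):
    """Gittins index for a job that has already received `attained`
    service, given an empirical list of possible total sizes:
    max over quanta d of P(finish within d) / E[service used in d]."""
    remaining = sorted(s - attained for s in sizes if s > attained)
    if not remaining:
        return float("inf")  # job can only finish now; serve it first
    n = len(remaining)
    best = 0.0
    for d in remaining:  # candidate quanta: the possible remaining sizes
        finish_prob = sum(1 for r in remaining if r <= d) / n
        exp_service = sum(min(r, d) for r in remaining) / n
        best = max(best, finish_prob / exp_service)
    return best

# Serve the job whose conditional remaining size looks smallest:
jobs = {"A": 0.0, "B": 3.0}    # attained service per job
sizes = [2.0, 4.0, 10.0]       # empirical job-size distribution
next_job = max(jobs, key=lambda j: gittins_index(sizes, jobs[j]))
```

Here job B, having already received 3 units of service, is more likely to be nearly done under this distribution, so its index is higher and it is served first; this is the mechanism by which the policy favors jobs with small expected remaining demand without knowing exact sizes.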
Yifei Liu
Shanghai Jiao Tong University, China
Zuo Gan
Shanghai Jiao Tong University, China
Zhen-Ji Gan
Weiye Wang
Shanghai Jiao Tong University, China
Chen Chen
Shanghai Jiao Tong University, China
Yizhou Shan
Huawei Cloud
Disaggregation, Operating System, Distributed System, Computer Architecture
Xusheng Chen
Huawei Cloud
Distributed Systems, Cloud Computing, Distributed Databases
Zhenhua Han
Unaffiliated, China
Yifei Zhu
Shanghai Jiao Tong University
Edge computing, multimedia networking, distributed ML systems
Shixuan Sun
Shanghai Jiao Tong University, China
Minyi Guo
IEEE Fellow, Chair Professor, Shanghai Jiao Tong University
Parallel Computing, Compiler Optimization, Cloud Computing, Networking, Big Data