FREESH: Fair, Resource- and Energy-Efficient Scheduling for LLM Serving on Heterogeneous GPUs

πŸ“… 2025-11-02
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
To jointly optimize energy efficiency and fairness for LLM inference on heterogeneous GPU clusters, this paper proposes FREESH, a framework that folds spatiotemporal computational flexibility into a multidimensional co-optimization unifying load routing, parallelism strategy selection, query scheduling, and dynamic GPU frequency scaling. FREESH combines GPU power-throughput characterization, predictable query-length matching, Least-Laxity-First scheduling, and load-balanced routing to minimize carbon emissions or energy consumption while satisfying SLO latency and fairness constraints. Evaluated on real production workloads over a one-hour serving window, FREESH reduces energy consumption by 28.6% and carbon emissions by 45.45% while also improving SLO attainment and request-level fairness.

πŸ“ Abstract
The ever-increasing computation and energy demands of LLMs and AI agents call for holistic and efficient optimization of LLM serving systems. In practice, heterogeneous GPU clusters can be deployed in a geographically distributed manner, while LLM load is also diverse in both query traffic and serving patterns. LLM queries running on advanced GPUs during a high-emission hour at one location can incur significantly higher carbon footprints than the same queries running on mid-level GPUs at a low-emission time and location. By observing LLM serving requirements and leveraging spatiotemporal computation flexibility, we consider the joint routing and scheduling problem and propose FREESH to cooperatively run a group of data centers while minimizing user-specified carbon or energy objectives. FREESH identifies optimal configurations for balanced load serving by matching each distinct GPU instance's power-throughput characteristics with predictable LLM query lengths and workloads. To meet both latency and fairness requirements, FREESH jointly determines optimized parallelism and query routing schedules, dynamic GPU frequency scaling for power saving, and a Least-Laxity-First (LLF) strategy for query scheduling. Over one hour of serving on production workloads, FREESH reduces energy by 28.6% and emissions by 45.45% while also improving SLO attainment and fairness.
Problem

Research questions and friction points this paper is trying to address.

Optimizing LLM serving for carbon and energy efficiency
Scheduling queries across heterogeneous distributed GPU clusters
Balancing latency requirements with fair resource allocation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages spatiotemporal flexibility for carbon optimization
Matches GPU power characteristics with query workloads
Combines dynamic frequency scaling with LLF scheduling
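The Least-Laxity-First scheduling mentioned above can be sketched as follows: each query's laxity is its slack before an SLO miss (deadline minus current time minus remaining service time), and the query with the least slack is served first. The field names and numbers here are hypothetical illustrations, not values from the paper.

```python
def llf_order(queries, now):
    """Sort pending queries by Least-Laxity-First (LLF).

    Laxity = (deadline - now) - remaining service time, i.e. the slack
    left before the query would miss its SLO; the scheduler serves the
    query whose SLO miss is most imminent first.
    """
    return sorted(queries, key=lambda q: q["deadline"] - now - q["remaining"])

# Hypothetical pending queries (illustrative values only).
pending = [
    {"name": "q1", "deadline": 10.0, "remaining": 2.0},  # laxity 8.0
    {"name": "q2", "deadline": 5.0,  "remaining": 4.0},  # laxity 1.0
    {"name": "q3", "deadline": 6.0,  "remaining": 1.0},  # laxity 5.0
]
print([q["name"] for q in llf_order(pending, now=0.0)])  # ['q2', 'q3', 'q1']
```

Unlike plain earliest-deadline-first, LLF also accounts for how much work a query still needs, which matters when LLM query lengths are predictable, as the paper assumes.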
Xuan He
Hong Kong University of Science and Technology (Guangzhou), China
Zequan Fang
Huazhong University of Science and Technology, China
Jinzhao Lian
Renmin University of China, China
Danny H. K. Tsang
Hong Kong University of Science and Technology (Guangzhou), China
Baosen Zhang
Keith and Nancy Rattie Endowed Career Development Professor, University of Washington
Power systems, smart grid
Yize Chen
Assistant Professor, University of Alberta
Machine Learning, Power Systems, Optimization, Control