Efficiently Executing High-throughput Lightweight LLM Inference Applications on Heterogeneous Opportunistic GPU Clusters with Pervasive Context Management

📅 2025-10-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
High-performance computing (HPC) clusters struggle to efficiently support the co-execution of lightweight large language models (LLMs) alongside traditional high-throughput scientific applications, suffering from prolonged waits in static batch queues and substantial model initialization overhead. Method: The paper introduces *Pervasive Context Management*, a technique that decouples the LLM initialization context from the inferences themselves and persistently retains that context on GPUs, enabling preemption tolerance and rapid context restoration. Combined with dynamic resource scheduling and opportunistic GPU utilization, this rearchitects the cluster workflow for generative-AI workloads. Results: Experiments on a fact verification application show end-to-end execution time dropping from 3 hours to 48 minutes (a 72.1% reduction) on the same GPU resources; opportunistically harvesting fragmented capacity on 32.8% of the cluster's GPUs further cuts execution time to 13 minutes. To the authors' knowledge, this is the first work to deliver low-overhead, highly elastic LLM inference in heterogeneous, opportunistic HPC environments.

📝 Abstract
The rise of Generative AI introduces a new class of HPC workloads that integrates lightweight LLMs with traditional high-throughput applications to accelerate scientific discovery. However, the current design of HPC clusters is inadequate to support this new class: jobs either incur long wait times in static batch queues or repeatedly pay expensive LLM startup costs upon resource preemption. To circumvent both the long queues and the high startup costs, we propose to "decouple" the LLM initialization context from the actual LLM inferences and retain the context in GPUs until it is no longer needed, a technique we term "Pervasive Context Management". We transform a fact verification application to enable this technique, allowing it to reduce its execution time by 72.1% (from 3 hours to 48 minutes) using the same number of GPUs, and to scale opportunistically onto 32.8% of all GPUs in the cluster, further reducing the execution time to 13 minutes.
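The core idea of the abstract, decoupling one-time LLM initialization from many short inference calls, can be illustrated with a minimal Python sketch. This is not the authors' system; the class and function names are hypothetical, and a no-op stand-in replaces the real model load, but it shows why paying startup cost once per retained context, rather than once per task, matters for high-throughput workloads.

```python
import time

class PersistentContext:
    """Hypothetical stand-in for a retained LLM initialization context:
    the expensive startup (weights load, runtime init) is paid once,
    then many lightweight inferences reuse the same context."""

    def __init__(self, load_seconds: float = 0.0):
        time.sleep(load_seconds)  # simulated startup cost
        self.model = lambda prompt: f"answer:{prompt}"  # dummy model
        self.loads = 1  # how many times initialization was paid

    def infer(self, prompt: str) -> str:
        return self.model(prompt)

def run_without_context(prompts, load_seconds: float = 0.0):
    # Naive pattern: each task (e.g., after preemption) re-initializes.
    results, loads = [], 0
    for p in prompts:
        ctx = PersistentContext(load_seconds)
        loads += ctx.loads
        results.append(ctx.infer(p))
    return results, loads

def run_with_context(prompts, load_seconds: float = 0.0):
    # Decoupled pattern: one context is created, retained, and reused.
    ctx = PersistentContext(load_seconds)
    return [ctx.infer(p) for p in prompts], ctx.loads

prompts = ["q1", "q2", "q3"]
_, naive_loads = run_without_context(prompts)
_, persistent_loads = run_with_context(prompts)
print(naive_loads, persistent_loads)  # prints: 3 1
```

With a realistic `load_seconds` in the tens of seconds per LLM startup, the naive pattern multiplies that cost by the number of tasks, which is the overhead the paper's retained-context design avoids.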
Problem

Research questions and friction points this paper is trying to address.

Optimizing LLM inference execution on heterogeneous GPU clusters
Reducing long queue times and high startup costs
Enabling efficient context management for high-throughput applications
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decoupling LLM context from inference execution
Retaining GPU context via pervasive management
Opportunistic scaling across heterogeneous GPU clusters
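The opportunistic-scaling idea above, placing lightweight LLM contexts on fragmented GPU capacity left over by batch jobs, can be sketched as a toy placement rule. This is an illustrative assumption, not the paper's scheduler: the function name and the memory figures are made up.

```python
def place_contexts(free_mem_gb, model_gb):
    """Hypothetical opportunistic placement: assign one lightweight-LLM
    context to every GPU whose free (fragmented) memory can hold the
    model; return the indices of the GPUs chosen."""
    return [i for i, free in enumerate(free_mem_gb) if free >= model_gb]

# Free memory fragments (GB) on GPUs partially occupied by batch jobs.
fragments = [3.0, 11.5, 0.5, 8.0, 24.0]
print(place_contexts(fragments, model_gb=7.0))  # prints: [1, 3, 4]
```

A real scheduler would also track preemption and restore retained contexts, but the placement test captures why lightweight models fit where full training jobs cannot.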