Tally: Non-Intrusive Performance Isolation for Concurrent Deep Learning Workloads

πŸ“… 2024-10-09
πŸ›οΈ International Conference on Architectural Support for Programming Languages and Operating Systems
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Low GPU utilization in production deep learning clusters leads to job queue buildup and increased operational costs. Existing GPU sharing solutions suffer from high integration overhead, weak performance isolation, and poor application compatibility. This paper proposes the first thread-block-level fine-grained GPU kernel scheduling framework, enabling strongly isolated concurrent execution of inference and training workloads on a single GPU without modifying frameworks or kernels. The approach is a non-intrusive, lightweight runtime built on CUDA streams and the hardware scheduler, combined with a priority-aware block-level resource arbitration policy that protects latency-sensitive inference tasks. Experimental results show that, compared to the state-of-the-art TGS system, the framework reduces the P99 latency overhead of high-priority inference tasks from 188.9% to just 7.2%, while sustaining over 80% of TGS's system throughput.

πŸ“ Abstract
GPU underutilization is a significant concern in many production deep learning clusters, leading to prolonged job queues and increased operational expenses. A promising solution to this inefficiency is GPU sharing, which improves resource utilization by allowing multiple workloads to execute concurrently on a single GPU. However, deploying GPU sharing in production settings faces critical obstacles due to the limitations of existing mechanisms, including high integration costs, inadequate performance isolation, and limited application compatibility. To address these issues, we introduce *Tally*, a non-intrusive GPU sharing mechanism that provides robust performance isolation and comprehensive workload compatibility. The key to Tally's robust performance isolation capability lies in its fine-grained thread-block-level GPU kernel scheduling strategy, which allows the system to effectively mitigate interference caused by workload co-execution. We evaluate Tally on a diverse range of workloads and show that it incurs an average overhead of only 7.2% on the 99th-percentile latency of high-priority inference tasks when executed concurrently with best-effort training workloads, compared to the 188.9% overhead exhibited by state-of-the-art GPU sharing systems like TGS, while achieving over 80% of TGS's system throughput.
Problem

Research questions and friction points this paper is trying to address.

Addresses GPU underutilization in deep learning clusters
Improves GPU sharing with robust performance isolation
Reduces overhead and enhances workload compatibility
Innovation

Methods, ideas, or system contributions that make the work stand out.

Non-intrusive GPU sharing mechanism
Fine-grained thread-block-level scheduling
Robust performance isolation capability
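The core idea behind thread-block-level scheduling — dispatching best-effort kernels a few thread blocks at a time so that high-priority inference kernels are never blocked for long — can be illustrated with a toy scheduler. This is a minimal simulation sketch, not Tally's actual implementation; all names here (`BlockLevelScheduler`, `slice_size`, the kernel labels) are hypothetical.

```python
from collections import deque

class BlockLevelScheduler:
    """Toy model of block-level GPU kernel scheduling (illustrative only)."""

    def __init__(self, slice_size=4):
        self.slice_size = slice_size   # thread blocks dispatched per slice
        self.high_priority = deque()   # latency-critical inference kernels
        self.best_effort = deque()     # throughput-oriented training kernels

    def submit(self, name, num_blocks, priority="best_effort"):
        queue = self.high_priority if priority == "high" else self.best_effort
        queue.append([name, num_blocks])

    def run(self):
        """Return the dispatch order as (kernel, blocks_dispatched) slices."""
        trace = []
        while self.high_priority or self.best_effort:
            if self.high_priority:
                # High-priority kernels run to completion without slicing.
                name, blocks = self.high_priority.popleft()
                trace.append((name, blocks))
            else:
                # Best-effort kernels are dispatched a few blocks at a time,
                # so newly arrived high-priority work waits at most one slice
                # rather than a whole kernel's duration.
                job = self.best_effort[0]
                step = min(self.slice_size, job[1])
                job[1] -= step
                trace.append((job[0], step))
                if job[1] == 0:
                    self.best_effort.popleft()
        return trace

sched = BlockLevelScheduler(slice_size=4)
sched.submit("train_matmul", 10)                    # best-effort training kernel
sched.submit("infer_attn", 2, priority="high")      # latency-critical inference
print(sched.run())
```

The inference kernel is dispatched first in full, while the training kernel is broken into slices of at most four blocks, capping how long it can monopolize the GPU.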
Wei Zhao
Stanford University, CentML, USA
Anand Jayarajan
University of Toronto, Vector Institute, CentML, Canada
Gennady Pekhimenko
University of Toronto
Computer Architecture · Systems · Systems for ML · Machine Learning