Guaranteeing Semantic and Performance Determinism in Flexible GPU Sharing

📅 2026-03-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of existing GPU sharing approaches, which either incur severe tail latency spikes for interactive services under coarse-grained time-division multiplexing or require invasive kernel modifications that compromise behavioral consistency in fine-grained spatial sharing. To overcome these challenges, the authors propose DetShare, a system that introduces a GPU coroutine abstraction to decouple logical execution contexts from physical resources, enabling transparent, fine-grained, and predictable sharing without application code modifications. DetShare is the first to simultaneously guarantee semantic determinism (result consistency) and performance determinism (predictable tail latency), leveraging lightweight context migration, workload-aware placement, and a TPOT-First scheduling policy. Experimental results demonstrate up to 79.2% higher training throughput, 15.1% lower P99 tail latency, 69.1% reduced average inference latency, and a 21.2% decrease in TPOT SLO violations.

📝 Abstract
GPU sharing is critical for maximizing hardware utilization in modern data centers. However, existing approaches present a stark trade-off: coarse-grained temporal multiplexing incurs severe tail-latency spikes for interactive services, while fine-grained spatial partitioning often necessitates invasive kernel modifications that compromise behavioral equivalence. We present DetShare, a novel GPU sharing system that prioritizes determinism and transparency. DetShare ensures semantic determinism (unmodified kernels yield identical results) and performance determinism (predictable tail latency), all while maintaining complete transparency (zero code modification). DetShare introduces GPU coroutines, a new abstraction that decouples logical execution contexts from physical GPU resources. This decoupling enables flexible, fine-grained resource allocation via lightweight context migration. Our evaluation demonstrates that DetShare improves training throughput by up to 79.2% compared to temporal sharing. In co-location scenarios, it outperforms state-of-the-art baselines, reducing P99 tail latency by 15.1% without compromising throughput. Furthermore, through workload-aware placement and our TPOT-First scheduling policy, DetShare decreases average inference latency by 69.1% and reduces Time-Per-Output-Token (TPOT) SLO violations by 21.2% relative to default policies.
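The abstract credits part of the latency improvement to a "TPOT-First" scheduling policy. The paper's actual algorithm is not given here; the sketch below is only a plausible illustration of the general idea of prioritizing decode requests by how close they are to violating their Time-Per-Output-Token SLO. All names (`Request`, `tpot_slack`, `tpot_first`) and the slack-based priority rule are assumptions, not the authors' API.

```python
# Hypothetical sketch of a TPOT-first priority rule; NOT the paper's
# actual implementation. All names and fields are illustrative.
from dataclasses import dataclass

@dataclass
class Request:
    name: str
    tokens_out: int       # output tokens generated so far
    elapsed_ms: float     # decode time spent so far
    tpot_slo_ms: float    # target time-per-output-token

def tpot_slack(r: Request) -> float:
    """Headroom before the request violates its TPOT SLO.

    Negative slack means the SLO is already violated, so the request
    sorts to the front of the queue.
    """
    current_tpot = r.elapsed_ms / max(r.tokens_out, 1)
    return r.tpot_slo_ms - current_tpot

def tpot_first(requests):
    """Order requests so the one with the least TPOT slack runs first."""
    return sorted(requests, key=tpot_slack)
```

Under this (assumed) rule, a request already over its per-token budget preempts one comfortably within it, which is one way a scheduler could trade a little average throughput for fewer TPOT SLO violations.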
Problem

Research questions and friction points this paper is trying to address.

GPU sharing
semantic determinism
performance determinism
tail latency
behavioral equivalence

Innovation

Methods, ideas, or system contributions that make the work stand out.

GPU sharing
determinism
GPU coroutines
fine-grained resource allocation
tail latency

🔎 Similar Papers
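The central innovation named above is the GPU coroutine: a logical execution context decoupled from physical GPU resources, so contexts can be migrated without application changes. The paper's mechanism is not detailed in this page; the toy model below only illustrates the decoupling idea with a logical-to-physical placement table. The `CoroutineTable` class and its methods are entirely hypothetical.

```python
# Hypothetical toy model of decoupling logical execution contexts
# ("GPU coroutines") from physical GPUs; NOT the paper's actual API.
class CoroutineTable:
    def __init__(self):
        self.placement = {}            # coroutine id -> physical GPU id

    def spawn(self, ctx_id: str, gpu: int):
        """Create a logical context initially placed on a physical GPU."""
        self.placement[ctx_id] = gpu

    def migrate(self, ctx_id: str, new_gpu: int):
        """Lightweight migration: only the logical->physical mapping
        changes; the application keeps addressing the same context."""
        old = self.placement[ctx_id]
        self.placement[ctx_id] = new_gpu
        return old, new_gpu

table = CoroutineTable()
table.spawn("train-job", 0)
table.migrate("train-job", 1)   # rebalance without touching app code
```

Because the application only ever sees the stable context id, a runtime holding such a table could repack contexts onto GPUs for fine-grained sharing while preserving transparency, which is the property the abstract claims.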
2024-10-09 · International Conference on Architectural Support for Programming Languages and Operating Systems · Citations: 0
Zhenyuan Yang
Key Laboratory of System Software (Chinese Academy of Sciences); Institute of Software, Chinese Academy of Sciences; University of Chinese Academy of Sciences
Wenxin Zheng
Shanghai Jiao Tong University
Operating Systems · Machine Learning
Mingyu Li
Key Laboratory of System Software (Chinese Academy of Sciences); Institute of Software, Chinese Academy of Sciences