🤖 AI Summary
To address resource fragmentation and low utilization caused by NVIDIA MIG GPU virtual machine placement in cloud data centers, this paper proposes GRMU, a multi-stage optimization framework. First, it formulates a multi-objective integer linear programming model to jointly optimize the request admission rate, the number of active GPUs, and migration overhead. Second, it introduces intra-GPU defragmentation and inter-GPU resource consolidation mechanisms. Third, it designs a dual-basket quota partitioning strategy that ensures fair co-location of heterogeneous workloads (small- and large-profile) with strong resource isolation. Evaluated on real-world GPU cluster traces from Alibaba Cloud, GRMU achieves a 22% higher request admission rate, reduces the number of active GPUs by 17%, and incurs migration for only 1% of MIG-VMs, significantly mitigating fragmentation and improving overall GPU utilization compared to baseline approaches.
📝 Abstract
The extensive use of GPUs in cloud computing and the growing need for multitenancy have driven the development of innovative solutions for efficient GPU resource management. Multi-Instance GPU (MIG) technology from NVIDIA enables shared GPU usage in cloud data centers by providing isolated instances. However, MIG placement rules often lead to fragmentation and suboptimal resource utilization. In this work, we formally model MIG-enabled VM placement as a multi-objective Integer Linear Programming (ILP) problem aimed at maximizing request acceptance, minimizing active hardware usage, and reducing migration overhead. Building upon this formulation, we propose GRMU, a multi-stage placement framework designed to address MIG placement challenges. GRMU performs intra-GPU migrations to defragment individual GPUs and inter-GPU migrations for consolidation and resource efficiency. It also employs a quota-based partitioning approach that divides GPUs into two distinct baskets: one for large-profile workloads and another for smaller-profile workloads. Each basket has predefined capacity limits, ensuring fair resource distribution and preventing large-profile workloads from monopolizing resources. Evaluations on a real-world Alibaba GPU cluster trace reveal that GRMU improves acceptance rates by 22%, reduces active hardware by 17%, and incurs migration for only 1% of MIG-enabled VMs, demonstrating its effectiveness in minimizing fragmentation and improving resource utilization.
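To make the dual-basket quota idea concrete, here is a minimal illustrative sketch of how such an admission check could look. All names, the slice-based profile sizes, the "large" cutoff, and the capacities are assumptions for illustration; the paper's GRMU framework defines its own profiles, quotas, and placement logic.

```python
from dataclasses import dataclass

# Hypothetical cutoff: MIG profiles of 4 slices or more (e.g. 4g.20gb on an
# A100-style GPU) are treated as "large"; smaller profiles go to the small basket.
LARGE_PROFILE_MIN_SLICES = 4

@dataclass
class Basket:
    """One basket with a predefined capacity quota, tracked in GPU slices."""
    capacity_slices: int
    used_slices: int = 0

    def try_admit(self, slices: int) -> bool:
        """Admit the request if this basket's quota still has room."""
        if self.used_slices + slices <= self.capacity_slices:
            self.used_slices += slices
            return True
        return False

def admit(request_slices: int, small: Basket, large: Basket) -> bool:
    """Route a request to the matching basket; reject it if that quota is full.

    Keeping separate quotas prevents large-profile workloads from consuming
    capacity reserved for small-profile ones, and vice versa.
    """
    basket = large if request_slices >= LARGE_PROFILE_MIN_SLICES else small
    return basket.try_admit(request_slices)
```

For example, with `small = Basket(capacity_slices=4)` and `large = Basket(capacity_slices=7)`, a 7-slice request is admitted once and then rejected when the large basket's quota is exhausted, while 2-slice requests continue to be admitted from the small basket's quota.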