Virgo: Cluster-level Matrix Unit Integration in GPUs for Scalability and Energy Efficiency

📅 2024-08-22
🏛️ arXiv.org
🤖 AI Summary
Modern GPU tensor cores face scalability and energy-efficiency bottlenecks because they are tightly coupled to the SIMT core, whose register file capacity and bandwidth constrain matrix operation size. Method: This paper proposes Virgo, a GPU microarchitecture that integrates dedicated matrix units at the SIMT core cluster level, physically decoupling matrix computation from scalar execution. This decoupling increases operation granularity at the hardware level, offloads both operand and accumulator accesses from the register file, and enables efficient concurrent execution of the SIMT core and the matrix unit. The design is implemented in synthesizable RTL and evaluated with optimized mappings for fused DNN operators. Contribution/Results: Evaluation demonstrates 67.3% and 24.2% reductions in on-chip active power consumption over Ampere-style and Hopper-style core-coupled baselines, respectively, advancing the energy efficiency and scalability of GPU matrix compute units.

📝 Abstract
Modern GPUs incorporate specialized matrix units such as Tensor Cores to accelerate GEMM operations, which are central to deep learning workloads. However, existing matrix unit designs are tightly coupled to the SIMT core, restricting operation size due to register file capacity and bandwidth constraints. Such a limitation in scalability makes it difficult to simultaneously improve compute throughput and energy efficiency in GPUs. To address this challenge, we propose Virgo, a GPU microarchitecture that integrates dedicated matrix units at the SIMT core cluster level. By decoupling the matrix unit from the SIMT core, Virgo eliminates scalability constraints imposed by the core microarchitecture. Consequently, Virgo increases operation granularity at the hardware level, reducing energy overhead from core instruction processing. Physical disaggregation also enables a unified matrix unit design and offloading both operand and accumulator accesses from the register file, improving data reuse and energy efficiency. Furthermore, this disaggregation supports efficient concurrent execution of the SIMT core and matrix unit, optimizing mapping for fused DNN workloads. Our evaluations using synthesizable RTL demonstrate that Virgo achieves 67.3% and 24.2% reduction in on-chip active power consumption, compared to the baseline Ampere-style and Hopper-style core-coupled designs.
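The abstract's register-file argument can be made concrete with a toy traffic count (an illustrative sketch, not the paper's model; all numbers and function names below are hypothetical): in a core-coupled design, the accumulator tile of a tiled GEMM is read and written through the register file on every partial-sum step along K, whereas a cluster-level unit that keeps accumulators locally only writes the final result.

```python
# Toy model of register-file (RF) accumulator traffic for a tiled GEMM
# C[M,N] += A[M,K] @ B[K,N]. Hypothetical illustration of Virgo's
# motivation; the counts are not taken from the paper.

def rf_accumulator_traffic(M, N, K, tile_k, core_coupled):
    """Count accumulator-related RF accesses, in elements.

    core_coupled=True  : the accumulator tile is read and written back
                         to the RF on each of the K // tile_k steps.
    core_coupled=False : the accumulator stays in the decoupled matrix
                         unit; only the final C tile is written out.
    """
    steps = K // tile_k
    if core_coupled:
        return 2 * M * N * steps   # read + write per partial-sum step
    return M * N                   # single final writeback

# Example: a 128x128 output tile accumulated over K=4096 in chunks of 16.
coupled   = rf_accumulator_traffic(128, 128, 4096, 16, core_coupled=True)
decoupled = rf_accumulator_traffic(128, 128, 4096, 16, core_coupled=False)
print(coupled // decoupled)  # → 512: relative RF accumulator traffic
```

The 2x factor assumes each partial-sum step both reads and writes the accumulator; real designs differ, but the linear scaling with K // tile_k is why offloading accumulators from the register file reduces both bandwidth pressure and energy.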
Problem

Research questions and friction points this paper is trying to address.

Existing matrix units (e.g., Tensor Cores) are tightly coupled to the SIMT core, so operation size is limited by register file capacity and bandwidth.
This coupling makes it difficult to improve compute throughput and energy efficiency simultaneously.
Core-coupled designs also constrain data reuse and concurrent SIMT/matrix execution in fused deep learning workloads.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates dedicated matrix units at the SIMT core cluster level, decoupled from individual cores
Increases hardware-level operation granularity and offloads operand and accumulator accesses from the register file
Supports concurrent SIMT core and matrix unit execution, with optimized mapping for fused DNN operators
Hansung Kim
Ruohan Yan
University of California, Berkeley
Joshua You
University of California, Berkeley
Tieliang Vamber Yang
University of California, Berkeley
Y. Shao
University of California, Berkeley