🤖 AI Summary
Modern GPU tensor cores face scalability and energy-efficiency bottlenecks due to the capacity and bandwidth constraints of the SIMT core register file. Method: This paper proposes Virgo, a GPU microarchitecture that integrates dedicated matrix units at the SIMT core cluster level, physically decoupling matrix computation from the SIMT core. This decoupling raises operation granularity in hardware, offloads both operand and accumulator accesses from the register file, and enables concurrent execution of the SIMT core and matrix unit. The design employs a unified matrix unit architecture, implemented in synthesizable RTL, with optimized mappings for fused DNN operators. Contribution/Results: Experimental evaluation demonstrates 67.3% and 24.2% reductions in on-chip active power consumption over Ampere-style and Hopper-style baselines, respectively, significantly advancing the energy efficiency and scalable deployment of matrix compute units.
📝 Abstract
Modern GPUs incorporate specialized matrix units such as Tensor Cores to accelerate GEMM operations, which are central to deep learning workloads. However, existing matrix unit designs are tightly coupled to the SIMT core, restricting operation size due to register file capacity and bandwidth constraints. This scalability limitation makes it difficult to simultaneously improve compute throughput and energy efficiency in GPUs. To address this challenge, we propose Virgo, a GPU microarchitecture that integrates dedicated matrix units at the SIMT core cluster level. By decoupling the matrix unit from the SIMT core, Virgo eliminates scalability constraints imposed by the core microarchitecture. Consequently, Virgo increases operation granularity at the hardware level, reducing energy overhead from core instruction processing. Physical disaggregation also enables a unified matrix unit design and offloads both operand and accumulator accesses from the register file, improving data reuse and energy efficiency. Furthermore, this disaggregation supports efficient concurrent execution of the SIMT core and matrix unit, enabling optimized mappings for fused DNN workloads. Our evaluations using synthesizable RTL demonstrate that Virgo achieves 67.3% and 24.2% reductions in on-chip active power consumption compared to baseline Ampere-style and Hopper-style core-coupled designs, respectively.
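The concurrent-execution idea behind the fused-workload mapping can be illustrated with a minimal functional sketch. This is not the paper's RTL or programming interface; the function names (`matrix_unit_gemm`, `simt_epilogue`) are hypothetical stand-ins for the two disaggregated units, and a thread pool stands in for hardware concurrency: while the "SIMT core" applies the elementwise epilogue to one tile, the "matrix unit" can already be multiplying the next.

```python
# Functional sketch only (assumed names, not Virgo's actual interface):
# models a fused GEMM + ReLU where a cluster-level matrix unit handles
# the GEMM and the SIMT core handles the elementwise epilogue.
from concurrent.futures import ThreadPoolExecutor

def matrix_unit_gemm(a, b):
    """Stand-in for the cluster-level matrix unit: a plain GEMM on one tile."""
    m, k, n = len(a), len(b), len(b[0])
    return [[sum(a[i][p] * b[p][j] for p in range(k)) for j in range(n)]
            for i in range(m)]

def simt_epilogue(tile):
    """Stand-in for SIMT-core elementwise work (here, a ReLU epilogue)."""
    return [[max(0.0, x) for x in row] for row in tile]

def fused_gemm_relu(a_tiles, b):
    # Pipeline the two "units": GEMMs for later tiles are submitted up
    # front, so they overlap with epilogue processing of earlier tiles.
    out = []
    with ThreadPoolExecutor(max_workers=2) as pool:
        gemm_futs = [pool.submit(matrix_unit_gemm, a, b) for a in a_tiles]
        for fut in gemm_futs:
            out.append(simt_epilogue(fut.result()))
    return out
```

For example, `fused_gemm_relu([[[1.0, -2.0], [3.0, 4.0]]], [[1.0, 0.0], [0.0, 1.0]])` multiplies the single tile by the identity and clamps negatives, yielding `[[[1.0, 0.0], [3.0, 4.0]]]`. In a core-coupled design, both the GEMM operands and the epilogue traffic would contend for the same register file; the decoupling described above is what makes this overlap efficient in hardware.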