H2SGEMM: Emulating FP32 GEMM on Ascend NPUs using FP16 Units with Precision Recovery and Cache-Aware Optimization

📅 2025-07-31
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the inability of AI accelerators such as Ascend—equipped solely with FP16 compute units—to natively execute FP32 general matrix multiplication (GEMM), this paper proposes a high-accuracy, high-performance FP32 GEMM emulation method. The approach comprises three key innovations: (1) a tunable-scaling-based FP32-to-FP16 decomposition with explicit error compensation; (2) term-wise accumulation, which significantly improves numerical stability in low-exponent regimes; and (3) cache-aware tiling coupled with double-buffered pipelining to enable computation–communication overlap and efficient hardware resource utilization. Evaluated on the Ascend 910A NPU, the method achieves 77% of the theoretical FP32-equivalent peak performance while matching native FP32 GEMM accuracy and, in certain scenarios, demonstrating superior numerical robustness.

📝 Abstract
Low-precision matrix engines, such as the FP16 cube unit, offer high throughput but lack support for full-precision computation. In this work, we propose H2SGEMM, a high-performance algorithm for emulating FP32 general matrix-matrix multiplication (GEMM) using only FP16 computation units on a representative AI accelerator. The method decomposes each FP32 operand into two FP16 values and compensates for numerical errors through a tunable scaling strategy. A detailed analysis of numerical errors, including underflow conditions and precision loss, guides the selection of scaling parameters to preserve up to 22 bits of mantissa accuracy. We further investigate the effect of computation order on accuracy and demonstrate that a term-wise accumulation scheme improves numerical stability over conventional FP32 GEMM in low-exponent regimes. Finally, a cache-aware blocking strategy and double-buffered pipeline are introduced to overlap memory transfers with computation, enabling H2SGEMM to achieve up to 77% of the theoretical FP32-equivalent peak performance on the Ascend 910A NPU, which lacks native FP32 support. Extensive numerical experiments confirm that our method not only recovers the accuracy of native FP32 GEMM but also exhibits superior numerical stability under certain conditions, due to its structured and error-aware computation order.
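The two-term decomposition described in the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the function names and the scale factor `2**11` are illustrative assumptions (FP16 carries an 11-bit significand, so a two-term split recovers roughly 22 mantissa bits, and scaling the residual lifts it above the FP16 underflow threshold). Casting to FP32 before the matmul stands in for hardware FP16 inputs with FP32 accumulation; the product terms are summed separately, mimicking term-wise accumulation.

```python
import numpy as np

def split_fp32(x, scale=2.0**11):
    """Split an FP32 array so that x ~= hi + lo / scale, with hi, lo in FP16.

    The residual (x - hi) is multiplied by `scale` before rounding to FP16
    so small residuals do not underflow (illustrative scale choice).
    """
    hi = x.astype(np.float16)
    lo = ((x - hi.astype(np.float32)) * scale).astype(np.float16)
    return hi, lo

def emulated_gemm(A, B, scale=2.0**11):
    """Emulate FP32 GEMM from FP16 operands with FP32 accumulation.

    Each of the three retained product terms is accumulated on its own
    (term-wise accumulation); the lo*lo term is negligible and dropped.
    """
    A_hi, A_lo = split_fp32(A, scale)
    B_hi, B_lo = split_fp32(B, scale)
    f32 = np.float32
    t0 = A_hi.astype(f32) @ B_hi.astype(f32)
    t1 = (A_hi.astype(f32) @ B_lo.astype(f32)) / f32(scale)
    t2 = (A_lo.astype(f32) @ B_hi.astype(f32)) / f32(scale)
    return t0 + (t1 + t2)
```

Against a naive single-FP16 cast, this sketch cuts the relative error from roughly 2^-11 to near FP32 levels, which is the effect the abstract's "precision recovery" refers to.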
Problem

Research questions and friction points this paper is trying to address.

Emulating FP32 GEMM using FP16 units with precision recovery
Optimizing numerical accuracy via tunable scaling and error analysis
Enhancing performance with cache-aware blocking and pipeline strategies
Innovation

Methods, ideas, or system contributions that make the work stand out.

FP32 emulation via FP16 decomposition
Tunable scaling for precision recovery
Cache-aware blocking for performance optimization
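The cache-aware blocking idea listed above can be sketched as a tiled loop nest. This is a minimal NumPy illustration under assumed names and tile sizes, not the paper's Ascend kernel: on the NPU the same structure would stage tiles in on-chip buffers and double-buffer the loads so copies overlap with cube-unit compute.

```python
import numpy as np

def blocked_gemm(A, B, tile=64):
    """Blocked matrix multiply: each tile of A and B is reused while hot.

    The per-(i, j) accumulator stays resident across the k loop, which is
    the locality pattern that cache-aware tiling exploits.
    """
    M, K = A.shape
    K2, N = B.shape
    assert K == K2, "inner dimensions must match"
    C = np.empty((M, N), dtype=np.float32)
    for i in range(0, M, tile):
        for j in range(0, N, tile):
            acc = np.zeros((min(tile, M - i), min(tile, N - j)), dtype=np.float32)
            for k in range(0, K, tile):
                acc += A[i:i + tile, k:k + tile] @ B[k:k + tile, j:j + tile]
            C[i:i + tile, j:j + tile] = acc
    return C
```

The tile size would be chosen so that three tiles (one each of A, B, and the accumulator) fit in the fastest buffer level; double buffering then adds a second copy of the A/B tiles so the next pair can be loaded during the current multiply.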
👥 Authors
Weicheng Xue, Baisong Xu, Kai Yang, Yongxiang Liu (Professor, National University of Defense Technology; Remote Sensing, Synthetic Aperture Radar, Image Processing, Pattern Recognition), Dengdeng Fan, Pengxiang Xu, Yonghong Tian