🤖 AI Summary
This work addresses the challenge of efficiently executing floating-point matrix multiplication (GEMM) on integer-tensor-accelerated hardware (e.g., NVIDIA A100/H100). We propose a precision-controllable integer tiling reconstruction method that decomposes floating-point GEMM into multiple exact integer matrix multiplications followed by floating-point accumulation. Our key contributions are: (i) the first lightweight error propagation model enabling precision-driven, adaptive estimation of the required number of integer tiles; and (ii) a systematic analysis revealing the dual impact of row/column scaling imbalance on both numerical accuracy and computational throughput. Experiments validate our theoretical error bounds and demonstrate accurate identification of precision failure boundaries under realistic imbalance scenarios. The method achieves floating-point-level accuracy while significantly improving integer hardware utilization, enabling explicit, tunable trade-offs between performance and precision.
📝 Abstract
Ootomo, Ozaki, and Yokota [Int. J. High Perform. Comput. Appl., 38 (2024), pp. 297–313] have proposed a strategy to recast a floating-point matrix multiplication in terms of integer matrix products. The factors A and B are split into integer slices, the product of these slices is computed exactly, and AB is approximated by accumulating these integer products in floating-point arithmetic. This technique is particularly well suited to mixed-precision matrix multiply-accumulate units with integer support, such as the NVIDIA tensor cores or the AMD matrix cores. The number of slices allows for performance-accuracy tradeoffs: more slices yield better accuracy but require more multiplications, which in turn reduce performance. We propose an inexpensive way to estimate the minimum number of multiplications needed to achieve a prescribed level of accuracy. Our error analysis shows that the algorithm may become inaccurate (or inefficient) if rows of A or columns of B are badly scaled. We perform a range of numerical experiments, both in simulation and on the latest NVIDIA GPUs, that confirm the analysis and illustrate strengths and weaknesses of the algorithm.
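To make the splitting idea concrete, here is a minimal NumPy sketch. It is not the authors' implementation: the function names `split_int_slices` and `int_slice_gemm`, the 7-bit slice width, the per-row/per-column power-of-two scaling, and the choice to drop low-order cross-slice products are all illustrative assumptions. The real scheme runs the exact integer products on int8 tensor cores; here `int64` matrix products stand in for them.

```python
import numpy as np

def split_int_slices(X, num_slices, bits=7, axis=1):
    """Split X into integer slices along rows (axis=1) or columns (axis=0).

    Each row (or column) is scaled by a power of two so the leading slice
    fits in `bits` bits; the remainder is split again, so that
    X ~= sum_k slices[k] * 2**(e - (k+1)*bits)  with per-row/column exponent e.
    (Hypothetical helper; slice width and rounding are illustrative choices.)
    """
    amax = np.max(np.abs(X), axis=axis, keepdims=True)
    amax[amax == 0] = 1.0                      # avoid log2(0) for zero rows/columns
    e = np.floor(np.log2(amax)) + 1            # per-row/column exponent
    R = X * 2.0 ** (bits - e)                  # scaled remainder, |R| < 2**bits
    slices = []
    for _ in range(num_slices):
        S = np.round(R)                        # leading `bits`-bit integer part
        slices.append(S.astype(np.int64))
        R = (R - S) * 2.0 ** bits              # promote the remainder for the next slice
    return slices, e

def int_slice_gemm(A, B, num_slices, bits=7):
    """Approximate A @ B by accumulating exact integer slice products in floating point."""
    sa, ea = split_int_slices(A, num_slices, bits, axis=1)  # slices per row of A
    sb, eb = split_int_slices(B, num_slices, bits, axis=0)  # slices per column of B
    C = np.zeros((A.shape[0], B.shape[1]))
    for i, Si in enumerate(sa):
        for j, Sj in enumerate(sb):
            if i + j >= num_slices:            # drop low-order cross terms (one common variant)
                continue
            P = Si @ Sj                        # exact in int64 for moderate sizes
            # undo the per-row scaling of A and per-column scaling of B
            C += P * 2.0 ** (ea - bits * (i + 1)) * 2.0 ** (eb - bits * (j + 1))
    return C
```

Increasing `num_slices` adds more exact integer products and recovers more mantissa bits of each row/column, which is the performance-accuracy knob the abstract describes; a badly scaled row of A or column of B forces the shared exponent `e` to sacrifice the small entries, matching the failure mode identified in the error analysis.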