Generating Data Locality to Accelerate Sparse Matrix-Matrix Multiplication on CPUs

📅 2025-01-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the low cache reuse and poor parallel scalability of sparse general matrix-matrix multiplication (SpGEMM) on CPUs, this paper proposes MAGNUS: an input- and system-aware scheme that reorders the intermediate product into cache-friendly chunks using a two-level hierarchical approach; a threshold-driven hybrid accumulation mechanism combining AVX-512 vectorized bitonic sorting with classical dense accumulation; and a chunk count chosen by minimizing the storage cost of the required data structures, implemented with OpenMP multithreading. On matrices from the SuiteSparse collection, MAGNUS outperforms mainstream libraries in most cases and is orders of magnitude faster than Intel MKL for several matrices. It also scales to massive random matrices that model social network graphs, where the baselines fail, and stays close to the optimal performance bound regardless of matrix size, structure, and density, demonstrating the effectiveness of hardware-algorithm co-design.

📝 Abstract
Sparse GEneral Matrix-matrix Multiplication (SpGEMM) is a critical operation in many applications. Current multithreaded implementations are based on Gustavson's algorithm and often perform poorly on large matrices due to limited cache reuse by the accumulators. We present MAGNUS (Matrix Algebra for Gigantic NUmerical Systems), a novel algorithm to maximize data locality in SpGEMM. To generate locality, MAGNUS reorders the intermediate product into discrete cache-friendly chunks using a two-level hierarchical approach. The accumulator is applied to each chunk, where the chunk size is chosen such that the accumulator is cache-efficient. MAGNUS is input- and system-aware: based on the matrix characteristics and target system specifications, the optimal number of chunks is computed by minimizing the storage cost of the necessary data structures. MAGNUS allows for a hybrid accumulation strategy in which each chunk uses a different accumulator based on an input threshold. We consider two accumulators: an AVX-512 vectorized bitonic sorting algorithm and classical dense accumulation. An OpenMP implementation of MAGNUS is compared with several baselines for a variety of different matrices on three Intel x86 architectures. For matrices from the SuiteSparse collection, MAGNUS is faster than all the baselines in most cases and is orders of magnitude faster than Intel MKL for several matrices. For massive random matrices that model social network graphs, MAGNUS scales to the largest matrix sizes, while the baselines fail to do so. Furthermore, MAGNUS is close to the optimal bound for these matrices, regardless of the matrix size, structure, and density.
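The locality idea in the abstract can be illustrated with a small sketch (not the MAGNUS implementation): a Gustavson-style SpGEMM in which intermediate products are first scattered into column-range chunks, and each chunk is then reduced with a dense accumulator small enough to stay cache-resident. The function name `chunked_spgemm` and the single-level chunking are simplifications of the paper's two-level hierarchical scheme.

```python
# Illustrative sketch of chunked SpGEMM accumulation (hypothetical names;
# a simplification of the paper's two-level reordering scheme).
# Matrices are lists of rows, each row a list of (col, val) pairs.

def chunked_spgemm(A, B, n_cols, chunk_cols=4):
    """Compute C = A * B, bucketing intermediate products by column chunk."""
    n_chunks = (n_cols + chunk_cols - 1) // chunk_cols
    C = []
    for a_row in A:
        # Phase 1: scatter intermediate products into per-chunk buckets.
        # This reordering is what creates locality for the accumulator.
        chunks = [[] for _ in range(n_chunks)]
        for a_col, a_val in a_row:
            for b_col, b_val in B[a_col]:
                chunks[b_col // chunk_cols].append((b_col, a_val * b_val))
        # Phase 2: reduce each chunk with a dense array sized to the chunk,
        # so it fits in cache. A hybrid scheme like the paper's would switch
        # to a sort-based accumulator for chunks below a density threshold.
        c_row = []
        for ci, bucket in enumerate(chunks):
            if not bucket:
                continue
            base = ci * chunk_cols
            dense = [0.0] * chunk_cols
            for col, val in bucket:
                dense[col - base] += val
            c_row.extend((base + j, v) for j, v in enumerate(dense) if v != 0.0)
        C.append(c_row)
    return C
```

Choosing `chunk_cols` (and hence the number of chunks) so the dense array fits in cache is the input- and system-aware step the abstract refers to: for dense-enough chunks, dense accumulation avoids the hashing or sorting cost that dominates plain Gustavson on large matrices.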
Problem

Research questions and friction points this paper is trying to address.

Sparse Matrix Multiplication
Multithreading Efficiency
Cache Utilization
Innovation

Methods, ideas, or system contributions that make the work stand out.

MAGNUS algorithm
SpGEMM optimization
AVX-512 technology