🤖 AI Summary
Existing matrix ISA extensions and their systolic-array implementations handle dense GEMM well but execute unstructured sparse-sparse matrix multiplication (SpGEMM) inefficiently, wasting work on zero-valued operands and lacking support for compressed sparse formats. This work proposes SparseZipper, a lightweight hardware modification that extends mainstream dense GEMM matrix extensions and systolic-array micro-architectures with minimal changes to the matrix instructions and control logic, adding compressed sparse indexing and zero-aware data paths. The design natively supports SpGEMM on highly sparse matrices with unstructured sparsity while preserving compatibility with standard sparse formats and enabling fine-grained zero skipping. Evaluated against a scalar hash-based SpGEMM implementation and a state-of-the-art vectorized version, it achieves 5.98× and 2.61× speedups, respectively, with only a 12.7% increase in systolic array area and a total system-on-chip area overhead of just a few percent.
📝 Abstract
The importance of general matrix multiplication (GEMM) is motivating new instruction set extensions for multiplying dense matrices in almost all contemporary ISAs, and these extensions are often implemented using high-performance systolic arrays. However, matrices in emerging workloads are not always dense, and sparse matrices, in which the vast majority of values are zeros, are becoming more common. Existing matrix extensions and micro-architectures cannot efficiently process highly sparse matrices for two reasons: (1) wasted work when one or both input values are zero; and (2) incompatibility with sparse matrix formats. This work proposes SparseZipper, which minimally modifies existing matrix extensions and systolic-array-based micro-architectures specialized for dense-dense GEMM to accelerate sparse-sparse GEMM operating on highly sparse matrices with unstructured sparsity structures. Our performance evaluation shows SparseZipper achieves 5.98x and 2.61x speedups over a scalar hash-based implementation of SpGEMM and a state-of-the-art vectorized SpGEMM version, respectively. Our component-level area evaluation shows SparseZipper increases the area of a baseline 16x16 systolic array by only 12.7%, resulting in an area overhead of just a few percent for an entire system-on-chip.
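To make the scalar hash-based SpGEMM baseline concrete, here is a minimal sketch of the classic row-wise (Gustavson-style) algorithm on CSR matrices, where each output row is accumulated in a hash table keyed by column index. The CSR layout (`row_ptr`, `col_idx`, `vals`) and the function name are illustrative assumptions, not taken from the paper's artifact.

```python
def spgemm_hash(a_ptr, a_col, a_val, b_ptr, b_col, b_val):
    """Compute C = A * B for CSR matrices A and B; return C in CSR form."""
    n_rows = len(a_ptr) - 1
    c_ptr, c_col, c_val = [0], [], []
    for i in range(n_rows):
        acc = {}  # hash accumulator for row i: column index -> partial sum
        for k in range(a_ptr[i], a_ptr[i + 1]):
            a = a_val[k]
            row_b = a_col[k]  # nonzero A(i, row_b) scales row row_b of B
            for j in range(b_ptr[row_b], b_ptr[row_b + 1]):
                acc[b_col[j]] = acc.get(b_col[j], 0.0) + a * b_val[j]
        for col in sorted(acc):  # emit row i in ascending column order
            c_col.append(col)
            c_val.append(acc[col])
        c_ptr.append(len(c_col))
    return c_ptr, c_col, c_val

# Example: A = [[1, 0], [0, 2]], B = [[0, 3], [4, 0]] in CSR
c = spgemm_hash([0, 1, 2], [0, 1], [1.0, 2.0],
                [0, 1, 2], [1, 0], [3.0, 4.0])
print(c)  # ([0, 1, 2], [1, 0], [3.0, 8.0]), i.e. C = [[0, 3], [8, 0]]
```

Note that this formulation never touches zero entries of A or B, which is exactly the work a dense systolic array wastes on highly sparse inputs.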