🤖 AI Summary
Accurate estimation of the condition number of sparse matrices is computationally expensive. This work proposes an efficient prediction method based on graph neural networks (GNNs) that enables rapid modeling of both the 1-norm and 2-norm condition numbers. By designing a feature engineering pipeline with linear complexity—O(nnz + n), where nnz denotes the number of nonzeros and n the matrix dimension—the approach supports two distinct prediction strategies. It achieves high accuracy while running significantly faster than classical estimators such as the Hager–Higham and Lanczos algorithms. Extensive evaluation on multiple benchmark datasets demonstrates speedups of up to an order of magnitude, offering a scalable solution for condition number estimation in large-scale sparse systems.
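The paper's actual feature set is not reproduced here, but the O(nnz + n) budget constrains what kinds of features are admissible: anything computable in a single pass over the CSR index and data arrays. A minimal sketch of plausible stand-in features under that constraint (all names and feature choices below are illustrative assumptions, not the paper's pipeline):

```python
import numpy as np
import scipy.sparse as sp

def matrix_features(A):
    """Illustrative O(nnz + n) feature vector for a sparse matrix.
    These are hypothetical stand-ins for GNN input features; each
    quantity is one pass over the CSR arrays, so total cost stays
    linear in nnz and n."""
    A = A.tocsr()
    n = A.shape[0]
    nnz = A.nnz
    nnz_per_row = np.diff(A.indptr)                        # O(n)
    row_abs_sum = np.asarray(abs(A).sum(axis=1)).ravel()   # O(nnz)
    d = np.abs(A.diagonal())                               # O(nnz)
    # Per-row diagonal dominance ratio; guard against empty rows.
    dom = d / np.maximum(row_abs_sum, 1e-300)
    return {
        "n": n,
        "nnz": nnz,
        "density": nnz / (n * n),
        "max_row_nnz": int(nnz_per_row.max()),
        "mean_abs": float(np.abs(A.data).mean()),
        "min_diag_dominance": float(dom.min()),
    }
```

Features like diagonal dominance are a natural fit because strongly diagonally dominant matrices tend to be well conditioned, giving the GNN a cheap conditioning-related signal.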
📝 Abstract
In this paper, we propose a fast method for estimating the condition number of sparse matrices using graph neural networks (GNNs). To enable efficient training and inference of GNNs, our proposed feature engineering achieves $\mathrm{O}(\mathrm{nnz} + n)$ complexity, where $\mathrm{nnz}$ is the number of non-zero elements in the matrix and $n$ denotes the matrix dimension. We propose two prediction schemes for estimating the matrix condition number using GNNs. Extensive experiments on both schemes, covering 1-norm and 2-norm condition number estimation, show that our method achieves a significant speedup over the Hager–Higham and Lanczos methods.
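For context on the baseline being outpaced: the Hager–Higham approach estimates $\kappa_1(A) = \|A\|_1 \, \|A^{-1}\|_1$ without forming $A^{-1}$, by combining a sparse LU factorization with Higham's block 1-norm estimator. A minimal sketch using SciPy (`cond1_estimate` is a name chosen here for illustration; the paper's experimental setup may differ):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, onenormest, splu

def cond1_estimate(A):
    """Hager–Higham style estimate of the 1-norm condition number
    kappa_1(A) = ||A||_1 * ||A^{-1}||_1 of a sparse matrix A.
    ||A^{-1}||_1 is estimated via onenormest (Higham's block
    algorithm) applied to a LinearOperator that solves with the
    LU factors, so A^{-1} is never formed explicitly."""
    A = A.tocsc()
    lu = splu(A)  # sparse LU factorization; the dominant cost
    n = A.shape[0]
    inv_op = LinearOperator(
        (n, n),
        matvec=lambda b: lu.solve(b),              # A^{-1} b
        rmatvec=lambda b: lu.solve(b, trans="T"),  # A^{-T} b
        dtype=A.dtype,
    )
    return onenormest(A) * onenormest(inv_op)
```

The factorization step is what makes this baseline expensive on large matrices, which is the cost the GNN predictor avoids at inference time.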