🤖 AI Summary
This work addresses the slow convergence of graph neural network-based preconditioners for sparse linear systems arising from PDE discretizations, a problem often caused by rank inflation due to graph aggregation. The authors propose NeuraLSP, the first neural preconditioner that explicitly incorporates near-nullspace information from the left singular subspace of the system matrix. By applying low-rank compression to preserve essential spectral structure and introducing a spectrally aware loss function with theoretical guarantees, NeuraLSP effectively mitigates rank inflation. The method significantly accelerates the convergence of the conjugate gradient solver, achieving up to a 53% speedup across diverse PDE problems while maintaining both theoretical rigor and empirical robustness.
📄 Abstract
Numerical techniques for solving partial differential equations (PDEs) are integral to many fields across science and engineering. Such techniques usually involve solving large, sparse linear systems, where preconditioning methods are critical. In recent years, neural methods, particularly graph neural networks (GNNs), have demonstrated their potential through accelerated convergence. Nonetheless, to extract connective structures, existing techniques aggregate discretized system matrices into graphs, and consequently suffer from rank inflation and suboptimal convergence rates. In this paper, we introduce NeuraLSP, a novel neural preconditioner, together with a novel loss metric, that leverages the left singular subspace spanned by the system matrix's near-nullspace vectors. By compressing spectral information into a fixed low-rank operator, our method is both provably and empirically robust to rank inflation, affording up to a 53% speedup. Beyond the theoretical guarantees for our newly formulated loss function, comprehensive experimental results across diverse families of PDEs further substantiate these advances.
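To make the core idea concrete, the following is a minimal sketch of how near-nullspace information can be compressed into a fixed low-rank operator that corrects a preconditioner. This is an illustration under our own assumptions, not the paper's actual construction: the function name `lowrank_preconditioner`, the correction formula, and the toy problem are all hypothetical, and the neural network and spectrally aware loss of NeuraLSP are omitted entirely.

```python
import numpy as np

def lowrank_preconditioner(A, k):
    """Illustrative low-rank spectral correction (assumed form, not NeuraLSP's).

    Builds M_inv = I + U_k (S_k^{-1} - I) U_k^T, where U_k spans the left
    singular subspace of the k smallest singular values of A (the
    near-nullspace). M_inv then acts like A^{-1} on those weak modes and
    like the identity everywhere else.
    """
    U, s, _ = np.linalg.svd(A)            # singular values in descending order
    U_k, s_k = U[:, -k:], s[-k:]          # keep the k smallest: near-nullspace
    return np.eye(A.shape[0]) + U_k @ np.diag(1.0 / s_k - 1.0) @ U_k.T

# Tiny SPD model problem with two near-nullspace modes.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((50, 50)))
eigs = np.linspace(1.0, 2.0, 50)
eigs[:2] = [1e-4, 1e-3]                   # ill-conditioned "weak" modes
A = Q @ np.diag(eigs) @ Q.T

M_inv = lowrank_preconditioner(A, k=2)
cond_A = np.linalg.cond(A)                # very large: plain CG converges slowly
cond_MA = np.linalg.cond(M_inv @ A)       # small: weak modes are mapped near 1
```

The design point this sketch captures is that the rank of the correction is fixed at `k` regardless of problem size, so the spectral information stays compressed; in the paper this role is played by a learned operator rather than an explicit SVD.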