NeuraLSP: An Efficient and Rigorous Neural Left Singular Subspace Preconditioner for Conjugate Gradient Methods

๐Ÿ“… 2026-01-28
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
This work addresses the slow convergence of graph neural networkโ€“based preconditioners for sparse linear systems arising from PDE discretizations, a problem often caused by rank inflation due to graph aggregation. The authors propose NeuraLSP, the first neural preconditioner that explicitly incorporates near-nullspace information from the left singular subspace of the system matrix. By applying low-rank compression to preserve essential spectral structure and introducing a spectrally aware loss function with theoretical guarantees, NeuraLSP effectively mitigates rank inflation. The method significantly accelerates the convergence of the conjugate gradient solver, achieving up to a 53% speedup across diverse PDE problems while maintaining both theoretical rigor and empirical robustness.

๐Ÿ“ Abstract
Numerical techniques for solving partial differential equations (PDEs) are integral to many fields across science and engineering. Such techniques usually involve solving large, sparse linear systems, where preconditioning methods are critical. In recent years, neural methods, particularly graph neural networks (GNNs), have demonstrated their potential through accelerated convergence. Nonetheless, to extract connective structures, existing techniques aggregate discretized system matrices into graphs, and suffer from rank inflation and a suboptimal convergence rate. In this paper, we present NeuraLSP, a novel neural preconditioner combined with a novel loss metric that leverages the left singular subspace of the system matrix's near-nullspace vectors. By compressing spectral information into a fixed low-rank operator, our method exhibits both theoretical guarantees and empirical robustness to rank inflation, affording up to a 53% speedup. Beyond the theoretical guarantees for our newly formulated loss function, comprehensive experimental results across diverse families of PDEs substantiate these advances.
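The core idea of compressing near-nullspace spectral information into a fixed low-rank operator can be illustrated with a classical deflation preconditioner. The sketch below is not NeuraLSP itself (the GNN and the spectrally aware loss are omitted, and the subspace is computed exactly rather than learned); it only shows, under those simplifying assumptions, how a low-rank operator built from the small-eigenvalue subspace of an SPD system matrix accelerates conjugate gradient convergence:

```python
import numpy as np

# 1D Laplacian as a stand-in for a PDE system matrix (SPD, ill-conditioned).
n = 200
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
rng = np.random.default_rng(0)
b = rng.standard_normal(n)

def pcg(A, b, Minv, tol=1e-8, maxit=5000):
    """Preconditioned conjugate gradient; returns solution and iteration count."""
    x = np.zeros_like(b)
    r = b.copy()                     # initial residual for x = 0
    z = Minv(r)
    p = z.copy()
    rz = r @ z
    for it in range(1, maxit + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, it
        z = Minv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

# Near-nullspace: eigenvectors of the k smallest eigenvalues (for SPD A these
# coincide with the left singular vectors of the smallest singular values).
# A learned preconditioner would approximate this subspace instead of
# computing it exactly, as done here for illustration.
w, V = np.linalg.eigh(A)
k = 10
lam, U = w[:k], V[:, :k]

def deflate(r):
    # Low-rank spectral correction: M^{-1} r = (I - U U^T) r + U diag(1/lam) U^T r
    c = U.T @ r
    return r + U @ (c / lam - c)

x_plain, iters_plain = pcg(A, b, lambda r: r)   # unpreconditioned CG
x_defl, iters_defl = pcg(A, b, deflate)         # CG with low-rank deflation
print(iters_plain, iters_defl)
```

With the ten slowest modes deflated, the effective condition number drops from lambda_max/lambda_1 to lambda_max/lambda_11, and the iteration count falls accordingly; the paper's contribution is to make such a subspace available cheaply and robustly through a neural model rather than an exact eigendecomposition.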
Problem

Research questions and friction points this paper is trying to address.

preconditioning
rank inflation
convergence rate
sparse linear systems
PDEs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Neural Preconditioner
Left Singular Subspace
Rank Inflation Robustness
Conjugate Gradient
Low-rank Operator
๐Ÿ”Ž Similar Papers
No similar papers found.