🤖 AI Summary
Whether Graph Neural Networks (GNNs) can generalize beyond their training domain remains uncertain, which limits their use for numerical method discovery, particularly for high-order finite-volume schemes on unstructured grids.
Method: We propose a novel paradigm integrating GNNs with symbolic regression to autonomously discover finite-volume discretizations—without requiring ground-truth solutions or labeled data. The framework leverages physics-informed neural network (PINN) residual loss to train GNNs that learn discrete differential operators directly on unstructured meshes.
Contribution/Results: We theoretically establish, for the first time, that a GNN trained on only minimal two-node configurations can extrapolate to reconstruct an exact first-order finite-volume scheme with O(ε) error, while automatically discovering second-order correction terms and recovering the classical midpoint rule. This shifts GNNs beyond mere function approximators to generative models capable of synthesizing multi-hop, high-order stencils. The approach significantly enhances cross-mesh and multi-scale generalization, offering an interpretable, data-efficient pathway for numerical scheme design in low-data regimes.
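The first-order scheme referred to above can be sketched as a two-point flux exchange between neighboring cells, which is precisely the information a two-node graph exposes. The following toy 1D code is our illustration of that classical stencil (uniform conductivity, explicit Euler time stepping, zero-flux boundaries), not the paper's implementation:

```python
import numpy as np

def fv_heat_step(u, x, dt, kappa=1.0):
    """One explicit first-order finite-volume step for the 1D heat
    equation u_t = kappa * u_xx on a mesh of cell centers x.

    The two-point flux (u_j - u_i) / d_ij across each face is the
    edge-local quantity a two-node graph suffices to learn.
    """
    # Flux across each interior face between cells i and i+1.
    flux = kappa * (u[1:] - u[:-1]) / (x[1:] - x[:-1])
    h = np.gradient(x)  # cell widths (a toy choice for this 1D sketch)
    du = np.zeros_like(u)
    du[:-1] += flux  # flux entering cell i through its right face
    du[1:] -= flux   # the same flux leaving cell i+1
    return u + dt * du / h

# Diffusion flattens a Gaussian bump while conserving total heat.
x = np.linspace(0.0, 1.0, 51)
u = np.exp(-100.0 * (x - 0.5) ** 2)
u1 = fv_heat_step(u, x, dt=1e-5)
```

Because each face flux is added to one cell and subtracted from its neighbor, the update is conservative by construction, a defining property of finite-volume schemes.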
📝 Abstract
Graph Neural Networks (GNNs) have reshaped the landscape of numerical simulation by demonstrating strong capabilities in approximating solutions of physical systems. However, their ability to extrapolate beyond their training domain (\textit{e.g.}, larger or structurally different graphs) remains uncertain. In this work, we establish that GNNs can serve purposes beyond their traditional role and, in conjunction with symbolic regression, be exploited to generate numerical schemes. First, we show numerically and theoretically that a GNN trained on a dataset consisting solely of two-node graphs can extrapolate a first-order Finite Volume (FV) scheme for the heat equation to out-of-distribution, unstructured meshes. Specifically, if a GNN achieves a loss $\varepsilon$ on such a dataset, it implements the FV scheme with an error of $\mathcal{O}(\varepsilon)$. Using symbolic regression, we show that the network effectively rediscovers the exact analytical formulation of the standard first-order FV scheme. We then extend this approach to an unsupervised setting: the GNN recovers the first-order FV scheme using only a residual loss similar to that of Physics-Informed Neural Networks (PINNs), with no access to ground-truth data. Finally, we push the methodology further by considering higher-order schemes: we train (i) a 2-hop and (ii) a 2-layer GNN with the same PINN loss, which autonomously discover (i) a second-order correction term to the initial scheme using a 2-hop stencil and (ii) the classic second-order midpoint scheme. These findings follow a recent paradigm in scientific computing: GNNs are not only strong approximators, but can also be active contributors to the development of novel numerical methods.
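The unsupervised setting described in the abstract can be illustrated with a deliberately minimal stand-in for the GNN: a single learnable weight `w` whose "message" on edge (i, j) is `w * (u_j - u_i) / d_ij`, trained with a PINN-style residual of the heat equation and no ground-truth solutions. The architecture, test fields, and training loop below are our simplifications for illustration, not the paper's method:

```python
import numpy as np

# 1D mesh and true diffusivity (the residual loss never sees solutions,
# only the PDE right-hand side kappa * u_xx for smooth test fields).
n, kappa = 64, 0.7
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]

def learned_operator(u, w):
    # Aggregate two-point-flux "messages" into a discrete diffusion operator.
    flux = w * (u[1:] - u[:-1]) / dx
    out = np.zeros_like(u)
    out[:-1] += flux
    out[1:] -= flux
    return out / dx

def residual_loss(w, modes):
    # PINN-style residual: the learned operator should match the analytic
    # kappa * u_xx for u = sin(k * pi * x); interior cells only.
    loss = 0.0
    for k in modes:
        u = np.sin(k * np.pi * x)
        rhs = -kappa * (k * np.pi) ** 2 * u
        loss += np.mean((learned_operator(u, w)[1:-1] - rhs[1:-1]) ** 2)
    return loss / len(modes)

# Gradient descent on the single weight, using a finite-difference gradient.
w, lr, modes = 0.1, 1e-4, [1, 2, 3]
for _ in range(200):
    eps = 1e-6
    g = (residual_loss(w + eps, modes) - residual_loss(w - eps, modes)) / (2 * eps)
    w -= lr * g
# The residual alone drives w toward the true diffusivity kappa.
```

The point of the sketch is the supervision signal: nothing labeled is ever provided, yet minimizing the PDE residual recovers the coefficient of the classical two-point-flux scheme, mirroring in miniature how the paper's GNN rediscovers the FV scheme from a residual loss alone.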