🤖 AI Summary
Graph Neural Networks (GNNs) solving partial differential equations (PDEs) suffer from limited information propagation due to insufficient message-passing iterations, hindering accurate physical modeling.
Method: We establish the first theoretical lower bounds on the minimum number of message-passing steps required for hyperbolic, parabolic, and elliptic PDEs. By jointly modeling physical constraints (e.g., the CFL condition), spatiotemporal discretization parameters, and GNN architecture, we derive a quantitative relationship between iteration count and effective information propagation range.
Contribution/Results: Our key contribution is to reveal, through a physics-driven analysis, the fundamental under-propagation inherent in empirically chosen iteration counts, and to provide analytically tractable, tight lower bounds. Experiments on four canonical PDEs show that GNNs meeting this bound accurately capture the underlying physics and yield high-accuracy solutions, whereas performance degrades significantly below it. This work reduces reliance on heuristic hyperparameter tuning and provides foundational theoretical guidance for designing physics-informed GNNs.
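To make the flavour of such a bound concrete, here is a minimal sketch for the hyperbolic case. All symbols and the specific formula below are illustrative assumptions, not the paper's derivation: if information travels at wave speed c, one solver step must carry it a physical distance c·Δt, while each message-passing iteration propagates it roughly one grid spacing Δx, so the per-step iteration count must be at least c·Δt/Δx (a CFL-like ratio).

```python
import math

# Hypothetical illustration (not the paper's exact bound): for a
# hyperbolic PDE with wave speed c, a single solver step must move
# information a physical distance c * dt, while each GNN
# message-passing iteration advances it roughly one grid hop dx.
def min_message_passing_steps(c: float, dt: float, dx: float) -> int:
    """Lower bound on message-passing iterations per time step."""
    return math.ceil(c * dt / dx)

# Refining the spatial grid raises the required iteration count:
print(min_message_passing_steps(2.0, 0.1, 0.05))   # -> 4
print(min_message_passing_steps(2.0, 0.1, 0.025))  # -> 8
```

Under these assumptions, choosing fewer iterations than this ratio means the GNN's receptive field is physically too small to cover the domain of dependence, which is the under-propagation failure mode the summary describes.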
📝 Abstract
This paper proposes sharp lower bounds on the number of message-passing iterations required in graph neural networks (GNNs) when solving partial differential equations (PDEs), significantly reducing the need for exhaustive hyperparameter tuning. Bounds are derived for the three fundamental classes of PDEs (hyperbolic, parabolic and elliptic) by relating the physical characteristics of the problem in question to the message-passing requirements of GNNs. In particular, we investigate the relationship between the physical constants of the governing equations, the spatial and temporal discretisation, and the message-passing mechanism of GNNs.
When the number of message-passing iterations falls below these limits, information cannot propagate sufficiently far through the network, yielding poor solutions even for deep GNN architectures. In contrast, when the proposed lower bound is satisfied, the GNN parameterisation allows the model to accurately capture the underlying phenomenology, resulting in solvers of adequate accuracy.
Numerical examples on four different equations demonstrate the sharpness of the proposed lower bounds.