Noisy PDE Training Requires Bigger PINNs

📅 2025-07-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the empirical risk control of Physics-Informed Neural Networks (PINNs) under noisy data, focusing on their approximation capability for partial differential equations (PDEs), such as the Hamilton–Jacobi–Bellman equation. Method: We establish the first theoretical characterization of the minimal network size—specifically, lower bounds on width and depth—required for PINNs to drive empirical risk below the noise variance when supervision labels are corrupted. Our analysis combines rigorous theoretical derivation with extensive experimental validation across both supervised and unsupervised PINN settings. Contribution/Results: We reveal a fundamental limitation: increasing sample size alone cannot guarantee sub-noise empirical risk without sufficient model capacity. Experiments confirm that PINNs meeting the derived parameter lower bound indeed achieve empirical risk below the noise level. This work provides the first quantitative, theoretically grounded guidelines for architecture design and hyperparameter selection of PINNs in noisy regimes.

📝 Abstract
Physics-Informed Neural Networks (PINNs) are increasingly used to approximate solutions of partial differential equations (PDEs), especially in high dimensions. In real-world applications, data samples are noisy, so it is important to know when a predictor can still achieve low empirical risk. However, little is known about the conditions under which a PINN can do so effectively. We prove a lower bound on the size of neural networks required for the supervised PINN empirical risk to fall below the variance of noisy supervision labels. Specifically, if a predictor achieves an empirical risk $O(\eta)$ below $\sigma^2$ (the variance of the supervision data), then necessarily $d_N \log d_N \gtrsim N_s \eta^2$, where $N_s$ is the number of samples and $d_N$ is the number of trainable parameters of the PINN. A similar constraint applies to the fully unsupervised PINN setting when boundary labels are sampled noisily. Consequently, increasing the number of noisy supervision labels alone does not provide a "free lunch" in reducing empirical risk. We also show empirically that PINNs can indeed achieve empirical risks below $\sigma^2$ under such conditions. As a case study, we investigate PINNs applied to the Hamilton–Jacobi–Bellman (HJB) PDE. Our findings lay the groundwork for quantitatively understanding the parameter requirements for training PINNs in the presence of noise.
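The bound $d_N \log d_N \gtrsim N_s \eta^2$ can be read as a minimal parameter count implied by a given sample size and target risk gap. The following sketch (not from the paper; the function name and the illustrative values of $N_s$ and $\eta$ are assumptions, and the implicit constant in $\gtrsim$ is taken to be 1) solves $d \log d \geq N_s \eta^2$ for the smallest integer $d$:

```python
import math

def min_params(n_samples: float, eta: float) -> int:
    """Smallest integer d satisfying d * log(d) >= n_samples * eta**2.

    Found by doubling an upper bracket, then binary search.
    Illustrative only: treats the bound's hidden constant as 1.
    """
    target = n_samples * eta**2
    lo, hi = 2, 2
    # Double until hi * log(hi) clears the target.
    while hi * math.log(hi) < target:
        hi *= 2
    # Binary search for the smallest valid d in [lo, hi].
    while lo < hi:
        mid = (lo + hi) // 2
        if mid * math.log(mid) >= target:
            hi = mid
        else:
            lo = mid + 1
    return lo

# Example: 10^5 noisy samples with a risk gap eta = 0.1
# gives a target of d * log(d) >= 1000.
print(min_params(1e5, 0.1))
```

The monotonicity of $d \log d$ is what makes the binary search valid; in practice the bound is a necessary condition, so any architecture with fewer parameters cannot drive empirical risk that far below the noise variance.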
Problem

Research questions and friction points this paper is trying to address.

Determine neural network size for low empirical risk in noisy PDEs
Prove lower bound on PINN parameters for noisy supervision labels
Investigate PINN performance on Hamilton-Jacobi-Bellman PDE with noise
Innovation

Methods, ideas, or system contributions that make the work stand out.

Larger PINNs needed for noisy PDE training
Prove lower bound on neural network size
Empirical risk below noise variance achievable
Sebastien Andre-Sloan
Department of Computer Science, The University of Manchester
Anirbit Mukherjee
Department of Computer Science, The University of Manchester
Deep Learning Theory · Differential Equations
Matthew Colbrook
Department of Applied Mathematics and Theoretical Physics, The University of Cambridge