Uncertainty Quantification for Physics-Informed Neural Networks with Extended Fiducial Inference

📅 2025-05-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the unreliability of confidence sets in physics-informed neural networks (PINNs), which arises from their dependence on subjective prior distributions or hyperparameters (e.g., dropout rates) for uncertainty quantification, this paper proposes the first Extended Fiducial Inference (EFI) framework tailored to PINNs. The method constructs honest confidence sets solely from observational data, without requiring prior assumptions or manual hyperparameter tuning. The paper establishes a theoretically grounded EFI framework that scales to large deep models by integrating three key innovations: a narrow-neck hyper-network architecture, physics-constraint embedding, and observation-error imputation, enabling rigorous, automatic, and robust uncertainty quantification. Experiments demonstrate substantial improvements in the statistical reliability and interpretability of PINNs for scientific computing, providing a principled foundation for trustworthy engineering decision-making.

📝 Abstract
Uncertainty quantification (UQ) in scientific machine learning is increasingly critical as neural networks are widely adopted to tackle complex problems across diverse scientific disciplines. For physics-informed neural networks (PINNs), a prominent model in scientific machine learning, uncertainty is typically quantified using Bayesian or dropout methods. However, both approaches suffer from a fundamental limitation: the prior distribution or dropout rate required to construct honest confidence sets cannot be determined without additional information. In this paper, we propose a novel method within the framework of extended fiducial inference (EFI) to provide rigorous uncertainty quantification for PINNs. The proposed method leverages a narrow-neck hyper-network to learn the parameters of the PINN and quantify their uncertainty based on imputed random errors in the observations. This approach overcomes the limitations of Bayesian and dropout methods, enabling the construction of honest confidence sets based solely on observed data. This advancement represents a significant breakthrough for PINNs, greatly enhancing their reliability, interpretability, and applicability to real-world scientific and engineering challenges. Moreover, it establishes a new theoretical framework for EFI, extending its application to large-scale models, eliminating the need for sparse hyper-networks, and significantly improving the automaticity and robustness of statistical inference.
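To make the object of study concrete, the sketch below sets up a PINN-style loss for the toy ODE u'(x) = -u(x), u(0) = 1. This is a minimal illustration, not the paper's code: the one-hidden-layer network, the finite-difference derivative (a real PINN would use automatic differentiation), and all names are assumptions.

```python
import numpy as np

def model(params, x):
    # Hypothetical one-hidden-layer network with tanh activation.
    w1, b1, w2, b2 = params
    h = np.tanh(np.outer(x, w1) + b1)   # shape (n, hidden)
    return h @ w2 + b2                  # shape (n,)

def pinn_loss(params, x_data, u_data, x_colloc, eps=1e-4):
    # Data-fit term: squared mismatch at the observed points.
    data_term = np.mean((model(params, x_data) - u_data) ** 2)
    # Physics term: residual of u' + u = 0 at collocation points,
    # with u' approximated by central finite differences here.
    du = (model(params, x_colloc + eps) - model(params, x_colloc - eps)) / (2 * eps)
    physics_term = np.mean((du + model(params, x_colloc)) ** 2)
    return data_term + physics_term

rng = np.random.default_rng(0)
hidden = 8
params = (rng.normal(size=hidden), rng.normal(size=hidden),
          rng.normal(size=hidden), 0.0)
x = np.linspace(0.0, 1.0, 20)
loss = pinn_loss(params, x, np.exp(-x), x)  # true solution u(x) = exp(-x)
```

The Bayesian and dropout UQ methods criticized in the abstract would attach uncertainty to `params` via a prior or a dropout rate; EFI instead derives it from imputed observation errors.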
Problem

Research questions and friction points this paper is trying to address.

- Quantify uncertainty in physics-informed neural networks
- Overcome limitations of Bayesian and dropout methods
- Provide honest confidence sets using observed data
Innovation

Methods, ideas, or system contributions that make the work stand out.

- Extended Fiducial Inference for PINNs
- Narrow-neck hyper-network learns parameters
- Honest confidence sets from observed data
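The narrow-neck hyper-network idea above can be sketched as follows. All shapes, names, and the linear-bottleneck form are illustrative assumptions, not the paper's architecture: imputed observation errors z are pushed through a low-dimensional "neck," and the output is read as the PINN's flattened parameter vector, so each draw of z induces a different PINN and the spread over many draws yields the confidence sets.

```python
import numpy as np

def hypernetwork(z, W_in, W_out):
    # z: (n_obs,) imputed random errors for the observations.
    neck = np.tanh(W_in @ z)   # narrow neck: dimension << number of PINN parameters
    return W_out @ neck        # flattened PINN parameter vector

rng = np.random.default_rng(1)
n_obs, neck_dim, n_pinn_params = 50, 4, 200
W_in = rng.normal(size=(neck_dim, n_obs)) * 0.1
W_out = rng.normal(size=(n_pinn_params, neck_dim)) * 0.1

# Two independent imputations of the observation errors give two
# PINN parameter draws; their variability is the source of uncertainty.
theta_1 = hypernetwork(rng.normal(size=n_obs), W_in, W_out)
theta_2 = hypernetwork(rng.normal(size=n_obs), W_in, W_out)
```

The narrow neck is what keeps this scalable: the hyper-network's trainable weights grow with the neck width rather than with the full PINN parameter count.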