Uncertainty-Aware Diagnostics for Physics-Informed Machine Learning

📅 2025-10-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
In physics-informed machine learning (PIML), multi-objective optimization—balancing data fidelity and physical consistency—leads to ambiguous model evaluation; models with strong statistical metrics may still exhibit latent failure modes arising from inadequately modeled epistemic uncertainty. To address this, we propose the Physics-Informed Log Evidence (PILE), a unified, uncertainty-aware scoring criterion that jointly incorporates physical constraints and observational data. Grounded in a Gaussian process framework, PILE embeds differential equation priors into the evidence, enabling principled hyperparameter optimization and kernel prior selection—even predicting kernel suitability for a target PDE *a priori*, without training data. Experiments demonstrate that minimizing PILE robustly guides architectural and regularization choices for neural operators and physics-informed neural networks (PINNs), enhancing model reliability, generalizability, and interpretability.

📝 Abstract
Physics-informed machine learning (PIML) integrates prior physical information, often in the form of differential equation constraints, into the process of fitting machine learning models to physical data. Popular PIML approaches, including neural operators, physics-informed neural networks, neural ordinary differential equations, and neural discrete equilibria, are typically fit to objectives that simultaneously include both data and physical constraints. However, the multi-objective nature of this approach creates ambiguity in the measurement of model quality. This is related to a poor understanding of epistemic uncertainty, and it can lead to surprising failure modes, even when existing statistical metrics suggest strong fits. Working within a Gaussian process regression framework, we introduce the Physics-Informed Log Evidence (PILE) score. Bypassing the ambiguities of test losses, the PILE score is a single, uncertainty-aware metric that provides a selection principle for hyperparameters of a PIML model. We show that PILE minimization yields excellent choices for a wide variety of model parameters, including kernel bandwidth, least squares regularization weights, and even kernel function selection. We also show that, even prior to data acquisition, a special 'data-free' case of the PILE score identifies a priori kernel choices that are 'well-adapted' to a given PDE. Beyond the kernel setting, we anticipate that the PILE score can be extended to PIML at large, and we outline approaches to do so.
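The abstract's selection principle rests on the Gaussian process marginal likelihood (log evidence), which scores hyperparameters such as kernel bandwidth without a held-out test loss. The paper's exact PILE formula, which additionally incorporates PDE constraints, is not reproduced on this page; the following is only a background sketch of the standard data-driven log evidence it builds on. All function names (`rbf_kernel`, `neg_log_evidence`) and the synthetic data are illustrative assumptions, not the authors' code.

```python
import numpy as np

def rbf_kernel(X1, X2, bandwidth):
    # Squared-exponential kernel: k(x, x') = exp(-||x - x'||^2 / (2 b^2)).
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def neg_log_evidence(X, y, bandwidth, noise=1e-2):
    # Negative GP log marginal likelihood:
    #   -log p(y | X) = 0.5 y^T K^{-1} y + 0.5 log|K| + (n/2) log(2 pi),
    # computed stably via a Cholesky factorization K = L L^T.
    n = len(y)
    K = rbf_kernel(X, X, bandwidth) + noise * np.eye(n)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * y @ alpha + np.log(np.diag(L)).sum() + 0.5 * n * np.log(2.0 * np.pi)

# Select the bandwidth by minimizing the negative log evidence over a grid,
# on synthetic 1-D data (illustrative only).
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(30, 1))
y = np.sin(2.0 * np.pi * X[:, 0]) + 0.05 * rng.standard_normal(30)
grid = np.logspace(-2, 1, 50)
best = min(grid, key=lambda b: neg_log_evidence(X, y, b))
```

In the paper's setting, the evidence is augmented with differential-equation priors, so the same minimization additionally penalizes kernels that are poorly adapted to the governing PDE.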
Problem

Research questions and friction points this paper is trying to address.

Addresses the ambiguity in measuring model quality for physics-informed machine learning
Introduces an uncertainty-aware metric for hyperparameter selection in PIML
Provides a selection principle for kernel parameters and regularization weights
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces the PILE score for model selection
Works within a Gaussian process regression framework
Provides uncertainty-aware hyperparameter optimization
Mara Daniels
Department of Mathematics, Massachusetts Institute of Technology
Liam Hodgkinson
University of Melbourne
probabilistic machine learning, deep learning theory
Michael W. Mahoney
ICSI , LBNL , and Department of Statistics, University of California at Berkeley