FP64 is All You Need: Rethinking Failure Modes in Physics-Informed Neural Networks

📅 2025-05-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Physics-informed neural networks (PINNs) frequently suffer from a failure mode wherein the PDE residual converges while the solution error remains unacceptably large—a phenomenon conventionally attributed to local minima. Method: We systematically replace single-precision (FP32) arithmetic with double-precision (FP64) floating-point computation, retaining the standard PINN architecture and loss function unchanged. Contribution/Results: We demonstrate that FP64 eliminates this failure mode entirely. Crucially, we identify the root cause as insufficient FP32 numerical precision, which causes the L-BFGS optimizer to prematurely satisfy its convergence criteria. Moreover, we reveal a three-stage evolution of training behavior (unconverged, failure, success) whose boundaries shift as numerical precision increases. Across multiple canonical PDE benchmarks—including the Burgers', Allen–Cahn, and Navier–Stokes equations—FP64-PINNs achieve zero failures and substantially reduced solution errors. These results establish that standard PINNs possess inherent reliability and robustness for physics modeling when implemented in high-precision arithmetic.
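The precision argument above can be made concrete with machine epsilon: a minimal sketch (not from the paper) comparing the smallest relative step FP32 and FP64 can represent, which bounds how far an optimizer's stopping tests can meaningfully discriminate progress.

```python
import numpy as np

# Machine epsilon: the smallest relative change each dtype can resolve.
eps32 = np.finfo(np.float32).eps  # 2**-23, about 1.19e-07
eps64 = np.finfo(np.float64).eps  # 2**-52, about 2.22e-16

# An L-BFGS stopping test that checks for a relative loss or gradient
# change near 1e-7 is already at FP32 resolution, so rounding noise alone
# can satisfy it; in FP64 the same test leaves roughly nine more orders
# of magnitude of representable progress before stalling.
print(f"FP32 eps = {eps32:.3e}, FP64 eps = {eps64:.3e}")
```

The ratio eps32 / eps64 is roughly 5e8, which is why a tolerance that is conservative in double precision can trigger a premature stop in single precision.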

📝 Abstract
Physics-Informed Neural Networks (PINNs) often exhibit failure modes in which the PDE residual loss converges while the solution error stays large, a phenomenon traditionally blamed on local optima separated from the true solution by steep loss barriers. We challenge this understanding by demonstrating that the real culprit is insufficient arithmetic precision: with standard FP32, the L-BFGS optimizer prematurely satisfies its convergence test, freezing the network in a spurious failure phase. Simply upgrading to FP64 rescues optimization, enabling vanilla PINNs to solve PDEs without any failure modes. These results reframe PINN failure modes as precision-induced stalls rather than inescapable local minima, and they expose a three-stage training dynamic (unconverged, failure, success) whose boundaries shift with numerical precision. Our findings emphasize that rigorous arithmetic precision is key to dependable PDE solving with neural networks.
Problem

Research questions and friction points this paper is trying to address.

PINNs fail due to FP32 precision causing premature optimization convergence
FP64 precision resolves failure modes in Physics-Informed Neural Networks
PINN training dynamics depend on numerical precision, not local minima
Innovation

Methods, ideas, or system contributions that make the work stand out.

Upgrading to FP64 prevents optimization stalls
FP64 enables solving PDEs without failure modes
Precision shifts training dynamics boundaries
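The fix the paper advocates—FP64 everywhere, otherwise a vanilla PINN—amounts to a one-line dtype change before model construction. The sketch below is an illustrative PyTorch setup, not the authors' code; the network size, collocation points, and stand-in loss are hypothetical.

```python
import torch

# Switch to double precision BEFORE building the model and optimizer,
# so all parameters and new tensors default to FP64.
torch.set_default_dtype(torch.float64)

# Hypothetical small PINN-style network: 2 inputs (e.g. x, t) -> 1 output.
net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

# PyTorch's L-BFGS default tolerances (tolerance_grad=1e-7,
# tolerance_change=1e-9) sit at or below FP32 machine epsilon (~1.2e-7),
# which is exactly where single precision can trip the stopping tests.
opt = torch.optim.LBFGS(
    net.parameters(),
    tolerance_grad=1e-7,
    tolerance_change=1e-9,
    line_search_fn="strong_wolfe",
)

x = torch.rand(16, 2)  # hypothetical collocation points

def closure():
    opt.zero_grad()
    u = net(x)
    # Stand-in loss for illustration; a real PINN would form the PDE
    # residual by differentiating u w.r.t. x via autograd.
    loss = (u ** 2).mean()
    loss.backward()
    return loss

opt.step(closure)
```

Because `set_default_dtype` acts globally, no per-tensor casts are needed; the rest of a standard PINN pipeline is unchanged, matching the paper's claim that only the arithmetic precision differs.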