AutoBalance: An Automatic Balancing Framework for Training Physics-Informed Neural Networks

📅 2025-10-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
In physics-informed neural networks (PINNs), the multi-objective loss—comprising PDE residuals, boundary conditions, and other constraints—exhibits significant curvature mismatches and optimization conflicts, hindering effective balancing by conventional single-optimizer schemes. To address this, we propose a “post-combine” paradigm: each loss term is assigned an independent adaptive optimizer; preconditioned gradients are computed separately and then aggregated, enabling loss decoupling and dynamic weighting. This approach circumvents the limitations of gradient pre-adjustment and avoids preconditioning failure under heterogeneous curvatures, while ensuring modularity and orthogonality with existing techniques. Evaluated on diverse strongly nonlinear PDE benchmarks, our method achieves substantially lower MSE and L∞ errors than state-of-the-art PINN approaches. Moreover, it synergistically enhances the performance of complementary techniques—including Neural Tangent Kernel (NTK)-based weighting and hard constraint enforcement.
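The "post-combine" idea described above can be illustrated with a minimal sketch. This is not the authors' implementation: it uses a hand-rolled Adam preconditioner on a toy problem with two quadratic "losses" of very different curvature (standing in for PDE-residual and boundary terms). Each loss keeps its own optimizer state; the preconditioned updates are computed separately and then summed.

```python
import numpy as np

def adam_step(g, state, lr=1e-2, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam preconditioning step; returns the (already scaled) update."""
    state["t"] += 1
    state["m"] = b1 * state["m"] + (1 - b1) * g
    state["v"] = b2 * state["v"] + (1 - b2) * g * g
    m_hat = state["m"] / (1 - b1 ** state["t"])
    v_hat = state["v"] / (1 - b2 ** state["t"])
    return -lr * m_hat / (np.sqrt(v_hat) + eps)

# Two loss terms with very different curvatures (illustrative quadratics,
# not actual PDE residuals): (w - 1)^2 and 100 * (w + 1)^2.
grads = [
    lambda w: 2.0 * (w - 1.0),      # gradient of (w - 1)^2
    lambda w: 200.0 * (w + 1.0),    # gradient of 100 * (w + 1)^2
]

w = 3.0
# One independent optimizer state per loss term.
states = [{"t": 0, "m": 0.0, "v": 0.0} for _ in grads]

for _ in range(2000):
    # "Post-combine": precondition each loss gradient separately, then sum.
    w += sum(adam_step(g(w), s) for g, s in zip(grads, states))
```

Because each Adam instance normalizes by its own second-moment estimate, the stiff term (curvature 200) no longer dominates the mild one (curvature 2), which is the balancing effect a single shared optimizer cannot provide when it preconditions the already-summed gradient.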

📝 Abstract
Physics-Informed Neural Networks (PINNs) provide a powerful and general framework for solving Partial Differential Equations (PDEs) by embedding physical laws into loss functions. However, training PINNs is notoriously difficult due to the need to balance multiple loss terms, such as PDE residuals and boundary conditions, which often have conflicting objectives and vastly different curvatures. Existing methods address this issue by manipulating gradients before optimization (a "pre-combine" strategy). We argue that this approach is fundamentally limited, as forcing a single optimizer to process gradients from spectrally heterogeneous loss landscapes disrupts its internal preconditioning. In this work, we introduce AutoBalance, a novel "post-combine" training paradigm. AutoBalance assigns an independent adaptive optimizer to each loss component and aggregates the resulting preconditioned updates afterwards. Extensive experiments on challenging PDE benchmarks show that AutoBalance consistently outperforms existing frameworks, achieving significant reductions in solution error, as measured by both the MSE and $L^{\infty}$ norms. Moreover, AutoBalance is orthogonal to and complementary with other popular PINN methodologies, amplifying their effectiveness on demanding benchmarks.
Problem

Research questions and friction points this paper is trying to address.

Balancing conflicting loss terms in PINNs
Addressing spectral heterogeneity in loss landscapes
Improving PDE solution accuracy and convergence
Innovation

Methods, ideas, or system contributions that make the work stand out.

Post-combine training paradigm with independent optimizers
Aggregates preconditioned updates from multiple loss components
Orthogonal to existing PINN methodologies for enhanced performance