🤖 AI Summary
While second-order optimization improves accuracy in Physics-Informed Neural Networks (PINNs), its high memory footprint and poor scalability hinder practical deployment. Method: We propose PINN Balls, a framework featuring a learnable domain decomposition and a sparse-coding-based Mixture of Experts (MoE) architecture for efficient, localized modeling of physical fields. It integrates Adversarial Adaptive Sampling (AAS) to improve training coverage in challenging regions and supports distributed second-order optimization. Contribution/Results: PINN Balls preserves theoretical rigor in solving PDEs while substantially reducing memory consumption and improving training scalability and computational efficiency. Experiments across diverse scientific machine learning benchmarks demonstrate state-of-the-art (SOTA) accuracy and overcome the scalability bottleneck that has historically limited large-scale adoption of second-order methods in PINNs.
📝 Abstract
Recent advances in Scientific Machine Learning have shown that second-order methods can enhance the training of Physics-Informed Neural Networks (PINNs), making them a suitable alternative to traditional numerical methods for Partial Differential Equations (PDEs). However, second-order methods impose large memory requirements, making them scale poorly with model size. In this paper, we define a local Mixture of Experts (MoE) that combines the parameter efficiency of ensemble models with sparse coding to enable second-order training. Our model, PINN Balls, also features a fully learnable domain decomposition (DD) structure, achieved through Adversarial Adaptive Sampling (AAS), which adapts the DD to the PDE and its domain. PINN Balls achieves better accuracy than the state of the art in scientific machine learning, while maintaining invaluable scalability properties and drawing on a sound theoretical background.
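To make the architecture concrete, here is a minimal sketch of a local MoE with ball-shaped domain decomposition, where each expert is gated by a learnable "ball" (center and radius) and the gates form a partition of unity over the domain. The Gaussian gating, the expert functions, and all names here are illustrative assumptions, not the paper's actual implementation (which also involves sparse coding, AAS, and second-order training).

```python
import numpy as np

def ball_gates(x, centers, radii):
    """Soft partition of unity from learnable 'balls'.

    x:       (N, d) collocation points
    centers: (K, d) learnable ball centers (assumed parameterization)
    radii:   (K,)   learnable ball radii
    Returns (N, K) gate weights that sum to 1 over experts.
    """
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # (N, K) squared distances
    w = np.exp(-d2 / (2.0 * radii[None, :] ** 2))              # Gaussian "ball" weights
    return w / w.sum(axis=1, keepdims=True)                    # normalize -> partition of unity

def moe_predict(x, centers, radii, experts):
    """Gate-weighted sum of local expert predictions at points x."""
    gates = ball_gates(x, centers, radii)                      # (N, K)
    preds = np.stack([f(x) for f in experts], axis=1)          # (N, K)
    return (gates * preds).sum(axis=1)                         # (N,)

rng = np.random.default_rng(0)
K, d = 4, 2
centers = rng.uniform(0.0, 1.0, size=(K, d))
radii = np.full(K, 0.3)
# Toy stand-ins for small expert networks (one scalar field each).
experts = [lambda x, a=a: np.sin(a * x).sum(axis=1) for a in range(1, K + 1)]

x = rng.uniform(0.0, 1.0, size=(8, d))
u = moe_predict(x, centers, radii, experts)  # approximate field values at x
```

Because each gate decays with distance from its center, far-away experts contribute negligibly, which is what makes the decomposition "local" and, in the full method, keeps second-order updates cheap per expert.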