🤖 AI Summary
Existing physics-informed neural networks (PINNs) suffer from the spectral bias inherent in multilayer perceptrons (MLPs), limiting their ability to accurately approximate high-frequency and strongly nonlinear PDE solutions. While fixed-parameter grid methods alleviate this inductive bias, they rely on high-resolution discretizations and dense collocation points, compromising flexibility. This work proposes Physics-Informed Gaussians (PIG): a mesh-free, adaptive feature embedding that replaces rigid grid parameters with learnable Gaussian functions, each characterized by a trainable mean and variance. Integrated into the PINN framework, PIG jointly optimizes Gaussian locations, shapes, and lightweight network weights via end-to-end differentiable training, the first such approach for Gaussian kernel parameters. Consequently, PIG eliminates dependence on grid resolution and fixed spatial configurations, mitigates spectral bias, improves high-frequency representation, and stabilizes training. Across diverse PDE benchmarks, PIG consistently outperforms standard PINNs and grid-based alternatives in both accuracy and generalization.
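The embedding described above can be sketched in a few lines: each Gaussian carries a trainable mean and (log-)variance, and the resulting feature vector feeds a lightweight MLP head. This is a minimal illustration in PyTorch, not the authors' implementation; the class names, layer sizes, and initialization are assumptions.

```python
import torch
import torch.nn as nn

class GaussianEmbedding(nn.Module):
    """Isotropic Gaussian features with trainable means and variances (illustrative)."""
    def __init__(self, in_dim: int, num_gaussians: int):
        super().__init__()
        # Trainable Gaussian centers, initialized uniformly in [0, 1]^d
        self.mu = nn.Parameter(torch.rand(num_gaussians, in_dim))
        # Trainable log-variances (the log keeps sigma^2 positive)
        self.log_var = nn.Parameter(torch.zeros(num_gaussians))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_dim) -> features: (batch, num_gaussians)
        sq_dist = ((x[:, None, :] - self.mu[None, :, :]) ** 2).sum(-1)
        return torch.exp(-0.5 * sq_dist / self.log_var.exp())

class PIG(nn.Module):
    """Gaussian feature embedding followed by a lightweight MLP head."""
    def __init__(self, in_dim: int = 2, num_gaussians: int = 64, hidden: int = 32):
        super().__init__()
        self.embed = GaussianEmbedding(in_dim, num_gaussians)
        self.head = nn.Sequential(
            nn.Linear(num_gaussians, hidden), nn.Tanh(), nn.Linear(hidden, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.embed(x))

model = PIG()
x = torch.rand(16, 2)       # 16 collocation points in a 2-D domain
print(model(x).shape)       # -> torch.Size([16, 1])
```

Because `mu` and `log_var` are ordinary `nn.Parameter`s, gradients from any downstream loss flow into the Gaussian positions and shapes just as they do into the MLP weights.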
📝 Abstract
The numerical approximation of partial differential equations (PDEs) using neural networks has seen significant advancements through Physics-Informed Neural Networks (PINNs). Despite their straightforward optimization framework and flexibility in implementing various PDEs, PINNs often suffer from limited accuracy due to the spectral bias of Multi-Layer Perceptrons (MLPs), which struggle to effectively learn high-frequency and nonlinear components. Recently, parametric mesh representations in combination with neural networks have been investigated as a promising approach to eliminate the inductive bias of MLPs. However, they usually require high-resolution grids and a large number of collocation points to achieve high accuracy while avoiding overfitting. In addition, the fixed positions of the mesh parameters restrict their flexibility, making accurate approximation of complex PDEs challenging. To overcome these limitations, we propose Physics-Informed Gaussians (PIGs), which combine feature embeddings using Gaussian functions with a lightweight neural network. Our approach uses trainable parameters for the mean and variance of each Gaussian, allowing for dynamic adjustment of their positions and shapes during training. This adaptability enables our model to optimally approximate PDE solutions, unlike models with fixed parameter positions. Furthermore, the proposed approach maintains the same optimization framework used in PINNs, allowing us to benefit from their excellent properties. Experimental results show the competitive performance of our model across various PDEs, demonstrating its potential as a robust tool for solving complex PDEs. Our project page is available at https://namgyukang.github.io/Physics-Informed-Gaussians/
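Since PIG keeps the standard PINN optimization framework, the training loop is the familiar residual-minimization loop, with the Gaussian parameters updated alongside the network weights. The sketch below solves a toy 1-D Poisson problem (u''(x) = -π² sin(πx) with zero boundary values, exact solution sin(πx)); the problem choice, model sizes, and hyperparameters are illustrative assumptions, not the paper's benchmark setup.

```python
import torch
import torch.nn as nn

class PIG1D(nn.Module):
    """1-D PIG: trainable Gaussian features plus a small MLP head (illustrative)."""
    def __init__(self, num_gaussians: int = 32, hidden: int = 32):
        super().__init__()
        self.mu = nn.Parameter(torch.linspace(0, 1, num_gaussians)[:, None])
        self.log_var = nn.Parameter(torch.full((num_gaussians,), -3.0))
        self.head = nn.Sequential(
            nn.Linear(num_gaussians, hidden), nn.Tanh(), nn.Linear(hidden, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, 1)
        sq = (x[:, None, :] - self.mu[None, :, :]).pow(2).sum(-1)
        return self.head(torch.exp(-0.5 * sq / self.log_var.exp()))

torch.manual_seed(0)
model = PIG1D()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for step in range(500):
    # Resample collocation points; autograd gives the PDE derivatives
    x = torch.rand(128, 1, requires_grad=True)
    u = model(x)
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    residual = u_xx + torch.pi**2 * torch.sin(torch.pi * x)
    xb = torch.tensor([[0.0], [1.0]])          # boundary points u(0)=u(1)=0
    loss = residual.pow(2).mean() + model(xb).pow(2).mean()
    opt.zero_grad()
    loss.backward()   # gradients reach mu, log_var, and the MLP weights alike
    opt.step()

x_test = torch.linspace(0, 1, 101)[:, None]
err = (model(x_test) - torch.sin(torch.pi * x_test)).abs().max()
print(f"max abs error: {err.item():.3f}")
```

The key point is that nothing in the loop is specific to PIG: swapping the Gaussian embedding in for a plain MLP leaves the PINN loss and optimizer untouched, which is what lets the method inherit the framework's flexibility.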