🤖 AI Summary
This work addresses elliptic interface problems whose solutions and derivatives exhibit discontinuities across interfaces. We propose Multi-TransNet, a physics-informed neural network framework based on non-overlapping domain decomposition, integrating both strong and weak coupling formulations of interface conditions with transferable neural networks (TransNets). Key innovations include: (i) a subdomain-adaptive neuron allocation strategy; (ii) a global uniform distribution preservation mechanism; (iii) an empirical formula linking neuron shape, coverage radius, and count; and (iv) adaptive normalization of loss-weighting parameters. Comprehensive evaluation on 2D and 3D multi-interface problems—including cases with large diffusion coefficient contrasts—demonstrates that Multi-TransNet achieves superior accuracy, computational efficiency, and robustness compared to state-of-the-art methods, while substantially reducing hyperparameter tuning effort.
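The base TransNet idea summarized above can be sketched in one dimension as follows. This is an illustrative NumPy sketch, not the authors' code: all names (`features`, `signs`, `centers`) and parameter values are assumptions. The hidden layer is fixed in advance as tanh ridge functions with random signs and uniformly distributed centers, scaled by a shape parameter, so only the output weights are free, and they are found by ordinary linear least squares.

```python
import numpy as np

# Illustrative sketch (not the authors' implementation): a two-layer
# network whose hidden neurons tanh(gamma * sign * (x - center)) are
# fixed in advance, with centers uniformly distributed over the domain.
rng = np.random.default_rng(0)
n_neurons, gamma = 200, 2.0

signs = rng.choice([-1.0, 1.0], size=n_neurons)     # ridge directions
centers = rng.uniform(-1.0, 1.0, size=n_neurons)    # uniform in [-1, 1]

def features(x):
    """Fixed hidden-layer outputs for points x, shape (m, n_neurons)."""
    return np.tanh(gamma * signs * (x[:, None] - centers))

# Only the output layer is trained: a linear least-squares fit of the
# feature matrix against a smooth target on collocation points.
x = np.linspace(-1.0, 1.0, 400)
target = np.sin(np.pi * x)
w, *_ = np.linalg.lstsq(features(x), target, rcond=None)

max_err = np.max(np.abs(features(x) @ w - target))
print(max_err)  # small uniform error
```

Because the hidden layer never changes, the same feature construction can be reused across problems ("transferable"), and solving a PDE reduces to one linear least-squares system for the output weights.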
📝 Abstract
The transferable neural network (TransNet) is a two-layer shallow neural network whose hidden-layer neurons are pre-determined and uniformly distributed; when it is applied to solving partial differential equations, least-squares solvers can then be used to compute its output-layer parameters. In this paper, we integrate the TransNet technique with nonoverlapping domain decomposition and interface conditions to develop a novel multiple transferable neural network (Multi-TransNet) method for solving elliptic interface problems, whose solutions and derivatives are typically discontinuous across interfaces. We first propose an empirical formula for the TransNet that characterizes the relationship between the radius of the domain-covering ball, the number of hidden-layer neurons, and the optimal neuron shape. In the Multi-TransNet method, we assign each subdomain a distinct TransNet with an adaptively determined number of hidden-layer neurons so as to maintain a globally uniform neuron distribution across the entire computational domain, and then couple all the subdomain TransNets by incorporating interface-condition terms into the loss function. The empirical formula is also extended to the Multi-TransNet setting and further employed to estimate appropriate neuron shapes for the subdomain TransNets, greatly reducing the parameter-tuning cost. Additionally, we propose a normalization approach that adaptively selects the weighting parameters for the terms in the loss function. Ablation studies and extensive comparative experiments on different types of elliptic interface problems with low- to high-contrast diffusion coefficients in two and three dimensions numerically demonstrate the superior accuracy, efficiency, and robustness of the proposed Multi-TransNet method.
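The coupling of subdomain networks through interface-condition terms can be illustrated on a 1D model problem. The sketch below is an assumption-laden toy version, not the paper's method or code: it uses one fixed-feature network per subdomain and assembles PDE residuals, boundary conditions, and the jump conditions [u] = 0 and [beta u'] = 0 as rows of a single linear least-squares system; the row weight `wt` is a hand-picked stand-in for the paper's adaptively normalized loss weights.

```python
import numpy as np

# Illustrative 1D toy (not the authors' code): -(beta u')' = 1 on (-1, 1),
# u(-1) = u(1) = 0, beta = beta1 on (-1, 0) and beta2 on (0, 1), with
# interface conditions [u] = 0 and [beta u'] = 0 at x = 0.
rng = np.random.default_rng(1)
n, gamma = 100, 3.0
beta1, beta2 = 1.0, 10.0  # a moderate diffusion-coefficient contrast

def make_net(lo, hi):
    """One subdomain network: fixed tanh features, uniform centers."""
    signs = rng.choice([-1.0, 1.0], size=n)
    centers = rng.uniform(lo, hi, size=n)
    def phi(x, order=0):
        t = np.tanh(gamma * signs * (np.atleast_1d(x)[:, None] - centers))
        if order == 0:
            return t
        if order == 1:
            return gamma * signs * (1.0 - t**2)
        return -2.0 * gamma**2 * t * (1.0 - t**2)   # signs^2 == 1
    return phi

phi1, phi2 = make_net(-1.0, 0.0), make_net(0.0, 1.0)
x1, x2 = np.linspace(-1.0, 0.0, 120), np.linspace(0.0, 1.0, 120)
Z, z = np.zeros((120, n)), np.zeros((1, n))
wt = 100.0  # hand-picked weight on boundary/interface rows

A = np.vstack([
    np.hstack([-beta1 * phi1(x1, 2), Z]),                 # PDE, subdomain 1
    np.hstack([Z, -beta2 * phi2(x2, 2)]),                 # PDE, subdomain 2
    np.hstack([wt * phi1(-1.0), z]),                      # u(-1) = 0
    np.hstack([z, wt * phi2(1.0)]),                       # u(1) = 0
    np.hstack([wt * phi1(0.0), -wt * phi2(0.0)]),         # [u] = 0
    np.hstack([wt * beta1 * phi1(0.0, 1),
               -wt * beta2 * phi2(0.0, 1)]),              # [beta u'] = 0
])
b = np.concatenate([np.ones(240), np.zeros(4)])
w, *_ = np.linalg.lstsq(A, b, rcond=None)
w1, w2 = w[:n], w[n:]

# Exact piecewise-quadratic solution u_i = -x^2/(2 beta_i) + a_i x + c
# for comparison (a1, a2, c from the boundary and flux conditions).
M = np.array([[-1.0, 0.0, 1.0], [0.0, 1.0, 1.0], [beta1, -beta2, 0.0]])
a1e, a2e, ce = np.linalg.solve(M, [0.5 / beta1, 0.5 / beta2, 0.0])
u1e = -x1**2 / (2 * beta1) + a1e * x1 + ce
u2e = -x2**2 / (2 * beta2) + a2e * x2 + ce
err = max(np.max(np.abs(phi1(x1) @ w1 - u1e)),
          np.max(np.abs(phi2(x2) @ w2 - u2e)))
print(err)
```

Because only output weights are unknown, the coupled system stays linear even with the interface rows, so the two subdomain networks are solved jointly in one least-squares solve; in the paper the analogous interface terms enter the loss function with adaptively normalized weights rather than the fixed `wt` used here.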