BumpNet: A Sparse Neural Network Framework for Learning PDE Solutions

📅 2025-12-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address model redundancy, poor generalization, and limited adaptivity in numerical PDE solving and operator learning, this paper proposes BumpNet—a sparse neural network framework based on fully trainable sigmoid-type bump basis functions. Methodologically, it parameterizes the sigmoid activation as a learnable bump function with adjustable position, shape, and amplitude; introduces three plug-and-play architectures—Bump-PINNs, Bump-EDNN, and Bump-DeepONet—for unified support of PDE solving, temporal evolution modeling, and operator learning; and integrates mesh-free basis expansion, dynamic pruning-driven *h*-adaptive training, and physics-informed constraints. Experiments on diverse PDE benchmarks show that BumpNet converges faster and more accurately than state-of-the-art methods, and at equivalent accuracy reduces parameter count by 40–60%, combining precision with strong generalization and computational efficiency.

📝 Abstract
We introduce BumpNet, a sparse neural network framework for PDE numerical solution and operator learning. BumpNet is based on meshless basis function expansion, in a similar fashion to radial-basis function (RBF) networks. Unlike RBF networks, the basis functions in BumpNet are constructed from ordinary sigmoid activation functions. This enables the efficient use of modern training techniques optimized for such networks. All parameters of the basis functions, including shape, location, and amplitude, are fully trainable. Model parsimony and h-adaptivity are effectively achieved through dynamically pruning basis functions during training. BumpNet is a general framework that can be combined with existing neural architectures for learning PDE solutions: here, we propose Bump-PINNs (BumpNet with physics-informed neural networks) for solving general PDEs; Bump-EDNN (BumpNet with evolutionary deep neural networks) to solve time-evolution PDEs; and Bump-DeepONet (BumpNet with deep operator networks) for PDE operator learning. Bump-PINNs are trained using the same collocation-based approach used by PINNs, Bump-EDNN uses a BumpNet only in the spatial domain and uses EDNNs to advance the solution in time, while Bump-DeepONets employ a BumpNet regression network as the trunk network of a DeepONet. Extensive numerical experiments demonstrate the efficiency and accuracy of the proposed architecture.
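The abstract states that BumpNet's basis functions are built from ordinary sigmoid activations with trainable shape, location, and amplitude, but does not spell out the parameterization here. A common way to obtain a localized bump from sigmoids is the difference of two shifted copies; the sketch below assumes that form and is illustrative, not the paper's exact construction. All names (`bump`, `bumpnet_forward`) are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bump(x, center, width, slope, amplitude):
    """One trainable 1-D bump built from two ordinary sigmoids.

    The rising and falling sigmoid edges form a localized bump whose
    position (center), support (width), sharpness (slope), and height
    (amplitude) are all free parameters, mirroring the fully trainable
    basis functions described in the abstract.
    """
    left = sigmoid(slope * (x - (center - width)))
    right = sigmoid(slope * (x - (center + width)))
    return amplitude * (left - right)

def bumpnet_forward(x, centers, widths, slopes, amplitudes):
    """Mesh-free expansion: sum of N bump basis functions at points x."""
    x = np.asarray(x, dtype=float)[:, None]             # (n_points, 1)
    phi = bump(x, centers, widths, slopes, amplitudes)  # (n_points, n_basis)
    return phi.sum(axis=1)
```

Because each bump is a composition of standard sigmoids, the expansion remains compatible with ordinary gradient-based training pipelines, which is the efficiency argument made in the abstract.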
Problem

Research questions and friction points this paper is trying to address.

Develops a sparse neural network framework for solving PDEs
Enables adaptive basis function pruning during training
Integrates with existing architectures for various PDE learning tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Meshless basis functions from sigmoid activations
Dynamic pruning for model parsimony and adaptivity
Combines with PINNs, EDNNs, DeepONets for PDEs
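The dynamic-pruning idea listed above can be sketched as follows. The amplitude-magnitude criterion and the `tol` threshold are assumptions for illustration; the paper's exact pruning rule may differ.

```python
import numpy as np

def prune_bumps(amplitudes, centers, widths, slopes, tol=1e-3):
    """Drop basis functions whose amplitude has shrunk below `tol`.

    Training starts from a rich set of bumps and periodically removes
    those contributing little, yielding model parsimony and h-adaptivity
    as described in the abstract. The magnitude test here is a plausible
    sketch, not the paper's stated criterion.
    """
    keep = np.abs(amplitudes) > tol
    return amplitudes[keep], centers[keep], widths[keep], slopes[keep]
```

In practice such a step would be interleaved with training epochs, so the basis adapts its resolution to the solution rather than being fixed on a mesh.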
Shao-Ting Chiu
Department of Electrical and Computer Engineering, Texas A&M University, TX, USA
Ioannis G. Kevrekidis
Department of Chemical and Biomolecular Engineering, The Johns Hopkins University, MD, USA
Ulisses Braga-Neto
Professor of Electrical and Computer Engineering, Texas A&M University
Machine Learning · Scientific Computation · Signal and Image Processing · Computational Biology