Enforcing the Principle of Locality for Physical Simulations with Neural Operators

📅 2024-05-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Neural operators often suffer from slow convergence and poor generalization in physical simulations because they violate the locality principle. To address this, we propose Data Decomposition Enforcing Local-Dependency (DDELD), a method that explicitly embeds physical locality into neural operator architectures. DDELD leverages PDE-based locality analysis to decompose input data, rigorously constraining information propagation to align with physical characteristic speeds and timestep constraints. It further reformulates inputs and local receptive fields to be compatible with backbone architectures such as Fourier Neural Operators and Graph Neural Operators. Experiments across diverse physical simulation tasks demonstrate that DDELD accelerates training convergence by up to 3.2× and reduces test error by an average of 37%. The method improves both accuracy and generalization, particularly in low-data regimes and large-scale engineering simulations, thereby advancing the physical consistency and practical applicability of neural operators.

📝 Abstract
Time-dependent partial differential equations (PDEs) for classic physical systems are established based on the conservation of mass, momentum, and energy, which are ubiquitous in scientific and engineering applications. These PDEs are strictly local-dependent according to the principle of locality in physics, which means that the evolution at a point is influenced only by the neighborhood around it, whose size is determined by the length of the timestep multiplied by the speed at which characteristic information travels in the system. However, deep learning architectures cannot strictly enforce this local dependency, as they inevitably widen the scope of information used for local predictions as the number of layers increases. Under limited training data, the extra irrelevant information results in sluggish convergence and compromised generalizability. This paper aims to solve this problem by proposing a data decomposition method that strictly limits the scope of information available to neural operators when making local predictions, called Data Decomposition Enforcing Local-Dependency (DDELD). Numerical experiments over multiple physical phenomena show that DDELD significantly accelerates training convergence and reduces the test errors of benchmark models on large-scale engineering simulations.
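The abstract's locality bound can be made concrete: a point's next-step value can only be influenced by grid points within a distance of (characteristic speed × timestep), a CFL-like radius. The sketch below, a minimal illustration rather than the paper's actual implementation, computes that radius for a uniform 1D grid and decomposes a field into overlapping windows of exactly that half-width, so that any model fed one window, however deep, cannot see beyond the physically admissible neighborhood. The function names and the edge-padding choice are assumptions for illustration.

```python
import numpy as np

def locality_radius(char_speed, dt, dx):
    """Number of grid points information can physically cross in one timestep.

    By the principle of locality, the neighborhood that can influence a
    point over one step has physical size char_speed * dt; dividing by the
    grid spacing dx converts that to a point count (rounded up).
    """
    return int(np.ceil(char_speed * dt / dx))

def decompose_local(field, radius):
    """Split a 1D field into overlapping windows of half-width `radius`.

    Each window is the entire input for one local prediction, which strictly
    limits the scope of information regardless of model depth. Boundaries
    are edge-padded here for simplicity (a hypothetical choice).
    """
    padded = np.pad(field, radius, mode="edge")
    # Window i covers padded grid points [i, i + 2*radius], centered on point i.
    return np.stack([padded[i : i + 2 * radius + 1]
                     for i in range(field.shape[0])])

# Example: characteristic speed 2.0, dt = 0.01, dx = 0.01 -> radius of 2 points.
r = locality_radius(char_speed=2.0, dt=0.01, dx=0.01)
windows = decompose_local(np.linspace(0.0, 1.0, 64), r)
print(r, windows.shape)  # 2 (64, 5)
```

Each window would then be passed to a local backbone (e.g. an FNO restricted to the window), and the per-point predictions reassembled into the full field.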
Problem

Research questions and friction points this paper is trying to address.

Deep Learning
Physics Simulation
Information Leakage
Innovation

Methods, ideas, or system contributions that make the work stand out.

DDELD
Efficiency Enhancement
Accuracy Improvement
🔎 Similar Papers
2024-02-19 · Neural Information Processing Systems · Citations: 8
Jiangce Chen
Carnegie Mellon University, Pittsburgh, PA, USA
Wenzhuo Xu
Carnegie Mellon University, Pittsburgh, PA, USA
Zeda Xu
Carnegie Mellon University, Pittsburgh, PA, USA
Noelia Grande Gutiérrez
Carnegie Mellon University, Pittsburgh, PA, USA
S. Narra
Carnegie Mellon University, Pittsburgh, PA, USA
Christopher McComb
Gerard G. Elia Associate Professor of Mechanical Engineering, Carnegie Mellon University
design science · engineering design · artificial intelligence · human-machine teaming