DGNet: Discrete Green Networks for Data-Efficient Learning of Spatiotemporal PDEs

📅 2026-03-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the poor generalization of existing neural PDE solvers under scarce training data, particularly when encountering unseen source terms—a limitation largely attributed to the absence of explicit modeling of PDE structural priors. To overcome this, the authors propose a hybrid physics-informed neural architecture that integrates Green’s function theory with the superposition principle. By leveraging graph-based discretization to explicitly embed the structural inductive bias inherent in Green’s functions, and exploiting superposition to efficiently respond to arbitrary source terms, the method achieves state-of-the-art accuracy across diverse spatiotemporal PDE tasks using only dozens of training trajectories. Notably, it demonstrates strong zero-shot generalization capabilities to previously unobserved source configurations.

📝 Abstract
Spatiotemporal partial differential equations (PDEs) underpin a wide range of scientific and engineering applications. Neural PDE solvers offer a promising alternative to classical numerical methods. However, existing approaches typically require large numbers of training trajectories, while high-fidelity PDE data are expensive to generate. Under limited data, their performance degrades substantially, highlighting their low data efficiency. A key reason is that PDE dynamics embody strong structural inductive biases that are not explicitly encoded in neural architectures, forcing models to learn fundamental physical structure from data. A particularly salient manifestation of this inefficiency is poor generalization to unseen source terms. In this work, we revisit Green's function theory, a cornerstone of PDE theory, as a principled source of structural inductive bias for PDE learning. Based on this insight, we propose DGNet, a discrete Green network for data-efficient learning of spatiotemporal PDEs. The key idea is to transform the Green's function into a graph-based discrete formulation, and embed the superposition principle into the hybrid physics-neural architecture, which reduces the burden of learning physical priors from data, thereby improving sample efficiency. Across diverse spatiotemporal PDE scenarios, DGNet consistently achieves state-of-the-art accuracy using only tens of training trajectories. Moreover, it exhibits robust zero-shot generalization to unseen source terms, serving as a stress test that highlights its data-efficient structural design.
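The central idea, a discrete Green's operator that responds to arbitrary source terms via superposition, can be illustrated with a minimal numerical sketch. This is not DGNet's implementation (the paper learns a graph-based operator from data); here we assemble the exact discrete Green's matrix for a 1D Poisson problem -u'' = f with zero Dirichlet boundaries, and all variable names are our own:

```python
import numpy as np

# Illustrative sketch: discrete Green's operator for -u'' = f on (0, 1)
# with u(0) = u(1) = 0, second-order finite differences on a uniform grid.
n = 64                    # number of interior grid points
h = 1.0 / (n + 1)         # grid spacing

# Tridiagonal discrete Laplacian L.
L = (np.diag(np.full(n, 2.0))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

# The discrete Green's matrix is the inverse operator: u = G @ f is the
# grid analogue of u(x) = ∫ G(x, x') f(x') dx'. (DGNet learns such an
# operator in graph form rather than inverting a known matrix.)
G = np.linalg.inv(L)

# Superposition: the response to a linear combination of sources equals
# the same combination of individual responses, so one operator handles
# unseen source terms with no retraining.
x = np.linspace(h, 1 - h, n)
f1 = np.exp(-200 * (x - 0.3) ** 2)   # localized source 1
f2 = np.exp(-200 * (x - 0.7) ** 2)   # localized source 2

u_combined = G @ (f1 + 2.0 * f2)
u_superposed = G @ f1 + 2.0 * (G @ f2)
assert np.allclose(u_combined, u_superposed)
```

Because the operator is linear, the equality above holds exactly (up to floating-point error), which is the structural prior the paper embeds to generalize zero-shot to source configurations never seen in training.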
Problem

Research questions and friction points this paper is trying to address.

spatiotemporal PDEs
data efficiency
Green's function
inductive bias
generalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Green's function
data-efficient learning
spatiotemporal PDEs
inductive bias
graph-based discretization