🤖 AI Summary
Existing tensor product graph partial differential equation (TPDEG)-based continuous modeling approaches support only first-order derivatives, leading to high-frequency signal suppression, slow information propagation, and inadequate characterization of multi-scale and heterogeneous structures.
Method: We propose second-order tensor PDEs on graphs (So-TPDEGs), establishing the first theoretical framework for second-order continuous product graph neural networks. Leveraging the separability of the cosine kernel, we achieve efficient spectral decomposition while explicitly preserving high-frequency spectral components. We further provide rigorous stability analysis under graph perturbations and characterize the oversmoothing mechanism.
Contribution/Results: By integrating spectral graph theory with the separability of product graph structures, So-TPDEGs enable high-order continuous dynamical modeling across multiple domains. Extensive theoretical and empirical evaluations demonstrate that So-TPDEGs significantly enhance representational capacity and robustness against oversmoothing, achieving superior effectiveness and robustness on heterophilic and complex-structured graphs.
📝 Abstract
Processing data that lies on multiple interacting (product) graphs is increasingly important in practical applications, yet existing methods are mostly restricted to discrete graph filtering. Tensorial partial differential equations on graphs (TPDEGs) offer a principled framework for modeling such multidomain data in a continuous setting. However, current continuous approaches are limited to first-order derivatives, which tend to dampen high-frequency signals and slow down information propagation. This makes these TPDEGs-based approaches less effective for capturing complex, multi-scale, and heterophilic structures. In this paper, we introduce second-order TPDEGs (So-TPDEGs) and propose the first theoretically grounded framework for second-order continuous product graph neural networks. Our approach leverages the separability of cosine kernels in Cartesian product graphs to implement efficient spectral decomposition, while naturally preserving high-frequency information. We provide rigorous theoretical analyses of stability under graph perturbations and over-smoothing behavior regarding spectral properties. Our theoretical results establish a robust foundation for advancing continuous graph learning across multiple practical domains.