🤖 AI Summary
This work identifies an expressivity deficiency in Spectral Graph Neural Networks (SGNNs): they can fail to distinguish non-isomorphic graphs even when those graphs have simple spectra (all Laplacian eigenvalues distinct), a limitation unaddressed in prior spectral expressivity analyses. To resolve this, the authors introduce spectral multiplicity as a new axis for characterizing expressivity and establish an expressivity hierarchy grounded in the multiplicity of the largest Laplacian eigenvalue. They further propose a rotation-equivariant spectral adaptation mechanism with a theoretical guarantee of enhanced expressivity, specifically completeness on simple spectrum graphs. The approach draws on graph Laplacian spectral analysis, the k-dimensional Weisfeiler–Lehman (k-WL) test, homomorphism counting, and eigenvector normalization. Experiments on MNIST superpixel classification and ZINC spectral consistency show significant improvements (+3.2% classification accuracy and a 41% reduction in spectral distance), empirically validating both the identified theoretical limitation and the efficacy of the proposed solution.
📝 Abstract
Spectral features are widely incorporated into Graph Neural Networks (GNNs) to improve their expressive power, i.e., their ability to distinguish non-isomorphic graphs. One popular example is the use of graph Laplacian eigenvectors as positional encodings in MPNNs and Graph Transformers. The expressive power of such Spectrally-enhanced GNNs (SGNNs) is usually evaluated via the k-WL graph isomorphism test hierarchy and homomorphism counting. Yet these frameworks align poorly with graph spectra, yielding limited insight into SGNNs' expressive power. We leverage a well-studied paradigm of classifying graphs by the multiplicity of their largest eigenvalue to introduce an expressivity hierarchy for SGNNs. We then prove that many SGNNs are incomplete even on graphs with distinct eigenvalues. To mitigate this deficiency, we adapt rotation-equivariant neural networks to the graph spectral setting, yielding a method that provably improves SGNNs' expressivity on simple spectrum graphs. We empirically verify our theoretical claims via an image classification experiment on the MNIST Superpixel dataset and eigenvector canonicalization on graphs from ZINC.
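To make the spectral notions above concrete, here is a minimal NumPy sketch (illustrative only, not the paper's code) that computes the combinatorial Laplacian spectrum of a graph and the multiplicity of its largest eigenvalue, the quantity the proposed hierarchy is built on. The example graphs and the tolerance value are our own choices: the complete graph K4 has largest eigenvalue 4 with multiplicity 3, while the path P4 has a simple spectrum (all eigenvalues distinct, so the top multiplicity is 1).

```python
import numpy as np

def laplacian_spectrum(adj):
    """Eigenvalues (ascending) of the combinatorial Laplacian L = D - A."""
    deg = np.diag(adj.sum(axis=1))
    return np.linalg.eigvalsh(deg - adj)

def top_multiplicity(eigvals, tol=1e-8):
    """Multiplicity of the largest Laplacian eigenvalue, up to tolerance."""
    return int(np.sum(eigvals > eigvals[-1] - tol))

# Complete graph K4: Laplacian eigenvalues are 0, 4, 4, 4.
k4 = np.ones((4, 4)) - np.eye(4)

# Path graph P4: all Laplacian eigenvalues distinct (simple spectrum).
p4 = np.array([[0, 1, 0, 0],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [0, 0, 1, 0]], dtype=float)

print(top_multiplicity(laplacian_spectrum(k4)))  # 3
print(top_multiplicity(laplacian_spectrum(p4)))  # 1
```

Graphs with top multiplicity 1, like P4 here, sit at the "simple spectrum" end of the hierarchy, which is exactly the regime where the paper proves many SGNNs are nonetheless incomplete.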