🤖 AI Summary
Existing self-supervised graph representation learning methods suffer from reliance on negative sampling, manually designed augmentation strategies, and insufficient robustness. To address these issues, this paper proposes LaplaceGNN, a non-contrastive self-supervised graph neural network framework that requires no negative sampling. Its core innovations are: (i) precomputing spectral augmentations guided by max-min centrality, explicitly injecting Laplacian signals into node features; and (ii) a spectral-guided adversarial bootstrapping training scheme that removes the dependence on handcrafted augmentations and contrastive losses. By unifying spectral graph theory with self-supervised encoding, LaplaceGNN achieves structure-aware representation learning. Extensive experiments on multiple benchmark datasets show that LaplaceGNN significantly outperforms state-of-the-art self-supervised graph methods while scaling linearly and exhibiting greater robustness to structural perturbations.
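To make the first innovation concrete, here is a minimal sketch of what centrality-guided spectral augmentation could look like. This is not the paper's implementation: the function names, the use of degree as the centrality measure, and the edge-dropping heuristic around the max- and min-centrality nodes are all illustrative assumptions.

```python
import numpy as np

def normalized_laplacian(A):
    """Symmetrically normalized Laplacian L = I - D^{-1/2} A D^{-1/2}."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)
    return np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

def centrality_guided_views(A):
    """Illustrative max-min centrality augmentation: build two structural
    views by perturbing edges around the most- and least-central nodes
    (degree centrality stands in for the paper's centrality measure)."""
    deg = A.sum(axis=1)
    hi, lo = int(np.argmax(deg)), int(np.argmin(deg))
    view_max, view_min = A.copy(), A.copy()
    view_max[hi, :] = view_max[:, hi] = 0.0  # drop edges at max-centrality node
    view_min[lo, :] = view_min[:, lo] = 0.0  # drop edges at min-centrality node
    return view_max, view_min

# Spectral signals (e.g., low-frequency Laplacian eigenvectors) can then be
# concatenated to node features, injecting structure before encoding.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = normalized_laplacian(A)
evals, evecs = np.linalg.eigh(L)          # spectrum lies in [0, 2]
spectral_features = evecs[:, :2]          # low-frequency positional signal
```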
📝 Abstract
We present LaplaceGNN, a novel self-supervised graph learning framework that bypasses the need for negative sampling by leveraging spectral bootstrapping techniques. Our method integrates Laplacian-based signals into the learning process, allowing the model to effectively capture rich structural representations without relying on contrastive objectives or handcrafted augmentations. By focusing on positive alignment, LaplaceGNN achieves linear scaling while offering a simpler, more efficient self-supervised alternative for graph neural networks, applicable across diverse domains. Our contributions are twofold: first, we precompute spectral augmentations through max-min centrality-guided optimization, enabling rich structural supervision without handcrafted augmentations; second, we integrate an adversarial bootstrapped training scheme that further strengthens feature learning and robustness. Our extensive experiments on different benchmark datasets show that LaplaceGNN achieves superior performance compared to state-of-the-art self-supervised graph methods, offering a promising direction for efficiently learning expressive graph representations.
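The "positive alignment without negatives" idea in the second contribution can be sketched with a BYOL-style update: an online network is trained to predict a slowly moving target network's projections, so no negative pairs or contrastive loss are needed. The function names and the cosine-distance objective below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def l2_normalize(x, eps=1e-12):
    """Row-wise L2 normalization."""
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def alignment_loss(online_pred, target_proj):
    """Negative-free objective: mean cosine distance between the online
    network's predictions and the (stop-gradient) target projections.
    Zero when the two embeddings point in the same directions."""
    p, z = l2_normalize(online_pred), l2_normalize(target_proj)
    return float(np.mean(2.0 - 2.0 * (p * z).sum(axis=-1)))

def ema_update(target_params, online_params, tau=0.99):
    """Bootstrapping step: the target encoder tracks the online encoder
    via an exponential moving average instead of gradient descent."""
    return {k: tau * target_params[k] + (1.0 - tau) * online_params[k]
            for k in target_params}
```

In a full training loop, the two centrality-guided views would be encoded by the online and target networks respectively, the alignment loss backpropagated through the online branch only, and `ema_update` applied once per step; an adversarial perturbation of the views (as in the paper's scheme) would slot in before the encoders.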