Self-Supervised Graph Learning via Spectral Bootstrapping and Laplacian-Based Augmentations

📅 2025-06-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing self-supervised graph representation learning methods suffer from reliance on negative sampling, manually designed augmentation strategies, and insufficient robustness. To address these issues, this paper proposes LaplaceGNN, a non-contrastive, negative-sampling-free self-supervised Graph Neural Network framework. Its core innovations are: (i) precomputing spectral augmentations based on max-min centrality, explicitly injecting Laplacian matrix signals into node features; and (ii) introducing a spectral-guided adversarial bootstrapping training mechanism that eliminates dependence on handcrafted augmentations and contrastive losses. By unifying spectral graph theory with self-supervised encoding, LaplaceGNN achieves structure-aware representation learning. Extensive experiments on multiple benchmark datasets demonstrate that LaplaceGNN significantly outperforms state-of-the-art self-supervised graph learning methods, while exhibiting linear scalability and enhanced robustness against structural perturbations.


📝 Abstract
We present LaplaceGNN, a novel self-supervised graph learning framework that bypasses the need for negative sampling by leveraging spectral bootstrapping techniques. Our method integrates Laplacian-based signals into the learning process, allowing the model to effectively capture rich structural representations without relying on contrastive objectives or handcrafted augmentations. By focusing on positive alignment, LaplaceGNN achieves linear scaling while offering a simpler, more efficient, self-supervised alternative for graph neural networks, applicable across diverse domains. Our contributions are twofold: first, we precompute spectral augmentations through max-min centrality-guided optimization, enabling rich structural supervision without relying on handcrafted augmentations; second, we integrate an adversarial bootstrapped training scheme that further strengthens feature learning and robustness. Our extensive experiments on different benchmark datasets show that LaplaceGNN achieves superior performance compared to state-of-the-art self-supervised graph methods, offering a promising direction for efficiently learning expressive graph representations.
Problem

Research questions and friction points this paper is trying to address.

Self-supervised graph learning without negative sampling
Capturing structural representations without contrastive objectives
Efficient linear scaling for graph neural networks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-supervised graph learning without negative sampling
Laplacian-based augmentations for structural representation
Adversarial bootstrapped training for robust feature learning
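The "bootstrapped, negative-sampling-free" objective listed above can be sketched in the BYOL style such methods build on: an online encoder's embeddings are aligned with those of a slowly updated (exponential-moving-average) target encoder using only positive pairs, so no negatives or contrastive loss are needed. All names below are illustrative assumptions, not the paper's exact formulation, and the adversarial component is omitted.

```python
import numpy as np

def cosine_align_loss(online: np.ndarray, target: np.ndarray) -> float:
    """Mean (2 - 2*cos_sim) over nodes; zero when the two views align exactly."""
    on = online / np.linalg.norm(online, axis=1, keepdims=True)
    tg = target / np.linalg.norm(target, axis=1, keepdims=True)
    return float(np.mean(2.0 - 2.0 * np.sum(on * tg, axis=1)))

def ema_update(target_w: np.ndarray, online_w: np.ndarray, tau: float = 0.99) -> np.ndarray:
    """Target parameters track the online parameters by exponential moving average."""
    return tau * target_w + (1.0 - tau) * online_w

# Identical views give zero loss; the target never receives gradients,
# it is only refreshed via ema_update after each online step.
z = np.random.randn(8, 16)
loss_same = cosine_align_loss(z, z)
print(round(loss_same, 6))  # 0.0
```

The key design point, shared with BYOL-style methods, is that the asymmetry between the online and EMA-target branches prevents representational collapse without any negative pairs, which is what enables the linear scaling claimed above.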
Lorenzo Bini
PhD Candidate at the University of Geneva
Graph Representation Learning · Self-Supervised Learning · 3D Genomics · Flow Matching

Stéphane Marchand-Maillet
Department of Computer Science, University of Geneva, Geneva, Switzerland 1227