SPGCL: Simple yet Powerful Graph Contrastive Learning via SVD-Guided Structural Perturbation

📅 2026-01-20
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the sensitivity of graph neural networks (GNNs) to structural noise and the difficulty existing graph contrastive learning methods face in simultaneously preserving structural priors and ensuring perturbation diversity. To this end, the authors propose SPGCL, a framework that integrates discrete stochastic edge dropping with continuous truncated singular value decomposition (SVD) to construct structure-aware contrastive views. Lightweight edge dropping injects diversity, while truncated SVD recovers critical edges and supplements semantically missing connections. A contrastive fusion module, incorporating a sparse top-P recovery mechanism and a global similarity constraint, enables controlled and robust structural perturbation. Extensive experiments on ten benchmark datasets show that SPGCL significantly improves both the accuracy and the noise robustness of GNNs, outperforming state-of-the-art graph contrastive learning and structure learning approaches.

πŸ“ Abstract
Graph Neural Networks (GNNs) are sensitive to structural noise from adversarial attacks or imperfections. Existing graph contrastive learning (GCL) methods typically rely on either random perturbations (e.g., edge dropping) for diversity or spectral augmentations (e.g., SVD) to preserve structural priors. However, random perturbations are structure-agnostic and may remove critical edges, while SVD-based views often lack sufficient diversity. Integrating these paradigms is challenging as they operate on discrete edge removal and continuous matrix factorization, respectively. We propose SPGCL, a framework for robust GCL via SVD-guided structural perturbation. Leveraging a recently developed SVD-based method that generalizes structural perturbation theory to arbitrary graphs, we design a two-stage strategy: (1) lightweight stochastic edge removal to inject diversity, and (2) truncated SVD to derive a structure-aware scoring matrix for sparse top-$P$ edge recovery. This integration offers three advantages: (1) Robustness to accidental deletion, as important edges can be recovered by SVD-guided scoring; (2) Enrichment with missing links, creating more informative contrastive views by introducing semantically meaningful edges; and (3) Controllable structural discrepancy, ensuring contrastive signals stem from semantic differences rather than edge-number gaps. Furthermore, we incorporate a contrastive fusion module with a global similarity constraint to align embeddings. Extensive experiments on ten benchmark datasets demonstrate that SPGCL consistently improves the robustness and accuracy of GNNs, outperforming state-of-the-art GCL and structure learning methods, validating its effectiveness in integrating previously disparate paradigms.
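The two-stage view construction described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration under stated assumptions, not the authors' implementation: the drop rate, SVD rank, top-P value, and the toy graph are all arbitrary, and the paper's contrastive fusion module and global similarity constraint are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def drop_edges(adj, drop_rate=0.2):
    """Stage 1: stochastic edge dropping on the upper triangle (undirected graph)."""
    upper = np.triu(adj, k=1)
    mask = rng.random(upper.shape) >= drop_rate
    kept = upper * mask
    return kept + kept.T  # re-symmetrize

def svd_top_p_recovery(adj_perturbed, rank=4, top_p=2):
    """Stage 2: a truncated SVD of the perturbed adjacency yields a dense,
    structure-aware score matrix; the top-P highest-scoring absent edges
    are added back, recovering critical or semantically missing links."""
    U, S, Vt = np.linalg.svd(adj_perturbed)
    scores = (U[:, :rank] * S[:rank]) @ Vt[:rank]  # rank-`rank` reconstruction
    # Score only edges that are currently absent (upper triangle, off-diagonal).
    candidate = np.triu(adj_perturbed == 0, k=1).astype(float) * scores
    best = np.argsort(candidate, axis=None)[::-1][:top_p]
    recovered = adj_perturbed.copy()
    for idx in best:
        i, j = np.unravel_index(idx, adj_perturbed.shape)
        if candidate[i, j] > 0:  # only add positively scored links
            recovered[i, j] = recovered[j, i] = 1.0
    return recovered

# Toy graph: two triangles joined by a single bridge edge (2, 3).
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0

view = svd_top_p_recovery(drop_edges(A, drop_rate=0.3), rank=3, top_p=2)
```

In a full GCL pipeline, two such views would be generated independently and fed to a shared GNN encoder, with the contrastive loss aligning their embeddings; because the recovery stage caps additions at P edges, the two views stay comparable in edge count, matching the paper's "controllable structural discrepancy" argument.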
Problem

Research questions and friction points this paper is trying to address.

Graph Neural Networks
Structural Noise
Graph Contrastive Learning
Data Augmentation
Robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Graph Contrastive Learning
SVD-Guided Perturbation
Structural Robustness
Edge Recovery
Contrastive Fusion
Zhang Guo
School of Artificial Intelligence, Xidian University, Xi'an, Shaanxi, 710126, China
Shuiping Gou
School of Artificial Intelligence, Xidian University, Xi'an, Shaanxi, 710126, China
Bo Liu
School of Artificial Intelligence, Xidian University, Xi'an, Shaanxi, 710126, China