GRE^2-MDCL: Graph Representation Embedding Enhanced via Multidimensional Contrastive Learning

📅 2024-09-12
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address label scarcity in graph neural networks (GNNs), whose performance relies heavily on abundant labeled data, and the inability of existing graph contrastive learning (GCL) methods to jointly model local neighborhoods and global topology, this paper proposes a multidimensional contrastive learning framework. It introduces a novel three-network collaborative architecture with triple-level contrastive losses: cross-network, cross-view, and neighbor-level. The framework integrates SVD-based graph augmentation, LAGNN for localized structural enhancement, and multi-head attention GNNs to jointly optimize local and global representations. On Cora, Citeseer, and PubMed, the method achieves average accuracies of 82.5%, 72.5%, and 81.6%, respectively, outperforming state-of-the-art GCL approaches. Visualizations confirm tighter intra-cluster and more separable inter-cluster node embeddings. The authors present this as the first work to systematically resolve the trade-off between node-level and graph-level representation learning in GCL.

📝 Abstract
Graph representation learning has emerged as a powerful tool for preserving graph topology when mapping nodes to vector representations, enabling various downstream tasks such as node classification and community detection. However, most current graph neural network models face the challenge of requiring extensive labeled data, which limits their practical applicability in real-world scenarios where labeled data is scarce. To address this challenge, researchers have explored Graph Contrastive Learning (GCL), which leverages enhanced graph data and contrastive learning techniques. While promising, existing GCL methods often struggle with effectively capturing both local and global graph structures, and with balancing the trade-off between node-level and graph-level representations. In this work, we propose Graph Representation Embedding Enhanced via Multidimensional Contrastive Learning (GRE^2-MDCL). Our model introduces a novel triple network architecture with a multi-head attention GNN as the core. GRE^2-MDCL first globally and locally augments the input graph using SVD and LAGNN techniques. It then constructs a multidimensional contrastive loss, incorporating cross-network, cross-view, and neighbor contrast, to optimize the model. Extensive experiments on benchmark datasets Cora, Citeseer, and PubMed demonstrate that GRE^2-MDCL achieves state-of-the-art performance, with average accuracies of 82.5%, 72.5%, and 81.6% respectively. Visualizations further show tighter intra-cluster aggregation and clearer inter-cluster boundaries, highlighting the effectiveness of our framework in improving upon baseline GCL models.
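The abstract names a cross-view contrastive term but does not reproduce its formula. As a rough illustration only (not the paper's exact loss), a minimal numpy sketch of a standard InfoNCE-style cross-view contrast, where node i's embeddings in the two augmented views form the positive pair and all other nodes serve as negatives:

```python
import numpy as np

def cross_view_infonce(z1: np.ndarray, z2: np.ndarray, tau: float = 0.5) -> float:
    """InfoNCE-style cross-view contrast: node i's embeddings in the two
    augmented views are the positive pair; all other nodes are negatives.
    (Illustrative sketch; not the paper's exact formulation.)"""
    # L2-normalize so the dot product is cosine similarity
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                               # (N, N) similarity logits
    sim = sim - sim.max(axis=1, keepdims=True)          # numerical stability
    log_softmax = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    # average negative log-probability of the positive (diagonal) pairs
    return float(-np.mean(np.diag(log_softmax)))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
loss_aligned = cross_view_infonce(z, z)    # views agree: low loss
loss_opposed = cross_view_infonce(z, -z)   # views disagree: high loss
```

When the two views produce matching embeddings, the diagonal similarities dominate and the loss is small; mismatched views drive it up, which is the signal the contrastive objective trains on.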
Problem

Research questions and friction points this paper is trying to address.

Addresses limited labeled data in graph neural networks.
Improves local and global graph structure capture.
Balances node-level and graph-level representation trade-offs.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Triple network architecture with multi-head attention GNN
Multidimensional contrastive loss incorporating cross-network, cross-view, neighbor contrast
Global and local graph augmentation using SVD and LAGNN
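The SVD augmentation listed above produces a global view by low-rank reconstruction of the adjacency matrix. The paper's exact procedure is not reproduced here; a minimal numpy sketch of truncated-SVD reconstruction as a globally-smoothed augmented view, with an assumed helper name `svd_augment`:

```python
import numpy as np

def svd_augment(adj: np.ndarray, rank: int) -> np.ndarray:
    """Truncated-SVD reconstruction of the adjacency matrix: keeps the top
    `rank` singular components as a denoised, global-structure view.
    (Illustrative sketch; helper name and details are assumptions.)"""
    u, s, vt = np.linalg.svd(adj, full_matrices=False)
    return (u[:, :rank] * s[:rank]) @ vt[:rank, :]

# toy 4-node undirected graph
A = np.array([[0., 1., 1., 0.],
              [1., 0., 1., 0.],
              [1., 1., 0., 1.],
              [0., 0., 1., 0.]])
A_low = svd_augment(A, rank=2)   # low-rank global view used as an augmentation
```

Keeping only the leading singular components suppresses noisy edges while preserving dominant community structure, which is why low-rank views are a common choice for the "global" branch of a contrastive setup.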
Kaizhe Fan
School of Advanced Manufacturing, Guangdong University of Technology, Guangzhou, CHINA
Quanjun Li
School of Advanced Manufacturing, Guangdong University of Technology, Guangzhou, CHINA