Conditional Distribution Learning on Graphs

📅 2024-11-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
In semi-supervised graph classification, a fundamental tension exists between the increasingly similar node embeddings produced by successive graph neural network (GNN) layers and the dissimilarity that contrastive learning demands of negative pairs. Method: To resolve this, the authors propose conditional distribution learning (CDL), a paradigm that abandons negative-sample-dependent contrastive learning and instead performs end-to-end alignment of representations, derived from the original graph, a weakly augmented graph, and a strongly augmented graph, at the conditional-distribution level, thereby enabling robust, semantically faithful graph representation learning. Contribution/Results: CDL is the first to introduce conditional distribution modeling into graph contrastive learning, eliminating the inherent conflict between message passing and negative sampling. By integrating multi-granularity graph data augmentation with positive-pair alignment, it preserves semantic consistency. Extensive experiments on multiple benchmark graph datasets demonstrate significant improvements in semi-supervised classification performance, validating CDL's dual advantages in semantic preservation and generalization capability.

📝 Abstract
Leveraging the diversity and quantity of data provided by various graph-structured data augmentations while preserving intrinsic semantic information is challenging. Additionally, successive layers in graph neural networks (GNNs) tend to produce more similar node embeddings, while graph contrastive learning aims to increase the dissimilarity between negative pairs of node embeddings. This inevitably results in a conflict between the message-passing mechanism (MPM) of GNNs and the contrastive learning (CL) of negative pairs within the same view. In this paper, we propose a conditional distribution learning (CDL) method that learns graph representations from graph-structured data for semi-supervised graph classification. Specifically, we present an end-to-end graph representation learning model that aligns the conditional distributions of weakly and strongly augmented features over the original features. This alignment enables the CDL model to effectively preserve intrinsic semantic information when both weak and strong augmentations are applied to graph-structured data. To avoid the conflict between the MPM and the CL of negative pairs, positive pairs of node representations are retained for measuring the similarity between the original features and the corresponding weakly augmented features. Extensive experiments on several benchmark graph datasets demonstrate the effectiveness of the proposed CDL method.
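The abstract's core objective (align the conditional distributions of the weakly and strongly augmented views over the original features, and keep only a positive-pair similarity term between the original and weakly augmented features) can be sketched in NumPy. This is a minimal illustration under stated assumptions, not the authors' implementation: the softmax-similarity form of the conditional distributions, the KL alignment term, the cosine positive-pair term, and the function names are all assumptions made for the sketch.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable row-wise softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cdl_style_loss(z, z_weak, z_strong, tau=0.5):
    """Toy CDL-style objective (illustrative, not the paper's exact loss).

    z, z_weak, z_strong: (n, d) feature matrices from the original,
    weakly augmented, and strongly augmented views.
    """
    # Conditional distributions over the original features: row i of
    # p_weak is p(j | weak view of node i) via softmax over similarities.
    p_weak = softmax(z_weak @ z.T / tau)
    p_strong = softmax(z_strong @ z.T / tau)
    # Alignment term: KL(p_weak || p_strong), averaged over nodes.
    kl = np.sum(
        p_weak * (np.log(p_weak + 1e-12) - np.log(p_strong + 1e-12)), axis=1
    ).mean()
    # Positive-pair term: cosine similarity between original and weak
    # views of the same node; no negative pairs are used.
    zn = z / np.linalg.norm(z, axis=1, keepdims=True)
    wn = z_weak / np.linalg.norm(z_weak, axis=1, keepdims=True)
    pos = (zn * wn).sum(axis=1).mean()
    # Minimize the distribution mismatch, maximize positive similarity.
    return kl - pos
```

When all three views coincide, the KL term vanishes and the positive-pair cosine is 1, so the loss reaches its minimum of -1 for this toy form; augmentations that distort semantics increase the KL term, which is the intuition behind aligning at the conditional-distribution level rather than contrasting negatives.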
Problem

Research questions and friction points this paper is trying to address.

Graph Neural Networks
Homophily vs Heterophily
Contrastive Learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

CDL
Graph Neural Networks
Semi-supervised Learning
Jie Chen
College of Computer Science, Sichuan University, Chengdu, China
Hua Mao
Assistant Professor, Northumbria University
Deep Learning, Multi-agents
Yuanbiao Gou
College of Computer Science, Sichuan University, Chengdu, China
Zhu Wang
Law School, Sichuan University, Chengdu, China
Xi Peng
College of Computer Science, Sichuan University, Chengdu, China