Mitigating Degree Bias Adaptively with Hard-to-Learn Nodes in Graph Contrastive Learning

📅 2025-06-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Graph neural networks (GNNs) suffer from "degree bias" in node classification: low-degree nodes perform significantly worse than high-degree ones because of imbalanced degree distributions. Existing graph contrastive learning (GCL) methods are hampered by a scarcity of positive samples and by uniform sample weighting, which limits how well they model low-degree nodes. Method: The paper proposes the Hardness Adaptive Reweighted (HAR) contrastive loss, which jointly incorporates label-guided positive-sample augmentation and a dynamic assessment of each node's learning hardness, together with SHARP, a framework that extends HAR to a broader range of scenarios. Contribution/Results: The authors theoretically show that HAR mitigates degree bias. Extensive experiments on four benchmark datasets demonstrate that SHARP consistently outperforms state-of-the-art methods in both overall accuracy and accuracy across all degree intervals, validating its effectiveness and generalizability.

📝 Abstract
Graph Neural Networks (GNNs) often suffer from degree bias in node classification tasks, where prediction performance varies across nodes with different degrees. Several approaches, which adopt Graph Contrastive Learning (GCL), have been proposed to mitigate this bias. However, the limited number of positive pairs and the equal weighting of all positives and negatives in GCL still lead to low-degree nodes acquiring insufficient and noisy information. This paper proposes the Hardness Adaptive Reweighted (HAR) contrastive loss to mitigate degree bias. It adds more positive pairs by leveraging node labels and adaptively weights positive and negative pairs based on their learning hardness. In addition, we develop an experimental framework named SHARP to extend HAR to a broader range of scenarios. Both our theoretical analysis and experiments validate the effectiveness of SHARP. The experimental results across four datasets show that SHARP achieves better performance against baselines at both global and degree levels.
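The abstract describes two ingredients: extra positive pairs drawn from node labels, and hardness-adaptive weights on positive and negative pairs. The paper's exact formulation is not given here, so the following is only a minimal NumPy sketch of the general idea (the function name `har_style_loss`, the weighting scheme, and the temperature `tau` are illustrative assumptions, not the authors' definitions): hard positives (low similarity to the anchor) and hard negatives (high similarity to the anchor) both receive larger weights in an InfoNCE-style objective.

```python
import numpy as np

def har_style_loss(z, y, tau=0.5):
    """Illustrative hardness-adaptive reweighted contrastive loss (sketch).

    z   : (n, d) node embeddings (normalized inside)
    y   : (n,) integer class labels, used to add same-label positive pairs
    tau : temperature

    NOTE: this is a hypothetical sketch of the general technique,
    not the HAR loss as defined in the paper.
    """
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    cos = z @ z.T                      # pairwise cosine similarities
    sim = np.exp(cos / tau)           # exponentiated similarity scores

    # Label-guided positives: same-label pairs, excluding self-pairs.
    pos_mask = (y[:, None] == y[None, :]).astype(float)
    np.fill_diagonal(pos_mask, 0.0)
    neg_mask = 1.0 - pos_mask
    np.fill_diagonal(neg_mask, 0.0)

    # Hardness weights: a hard positive has LOW similarity, a hard
    # negative has HIGH similarity; both get larger weights.
    w_pos = pos_mask * np.exp(-cos / tau)
    w_neg = neg_mask * np.exp(cos / tau)
    w_pos = w_pos / np.maximum(w_pos.sum(1, keepdims=True), 1e-12)
    w_neg = w_neg / np.maximum(w_neg.sum(1, keepdims=True), 1e-12)

    pos = (w_pos * sim).sum(1)        # weighted positive evidence per anchor
    neg = (w_neg * sim).sum(1)        # weighted negative evidence per anchor
    return float(-np.log(pos / (pos + neg)).mean())
```

Under uniform weights this reduces to an ordinary supervised contrastive loss; the exponential reweighting shifts gradient mass toward hard pairs, which is the mechanism the paper argues helps under-represented low-degree nodes.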
Problem

Research questions and friction points this paper is trying to address.

Mitigating degree bias in Graph Neural Networks
Improving learning for low-degree nodes in GCL
Adaptive weighting of node pairs based on hardness
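Degree bias as framed above is usually measured by bucketing test nodes into degree intervals and comparing per-bucket accuracy, which is what "accuracy across all degree intervals" refers to. A minimal sketch of such a diagnostic (the helper name and the default bin edges are assumptions for illustration, not from the paper):

```python
import numpy as np

def accuracy_by_degree(degrees, y_true, y_pred, bins=(0, 2, 5, 10, np.inf)):
    """Report classification accuracy per degree interval [lo, hi).

    Hypothetical helper: markedly lower accuracy in the low-degree
    buckets is the degree bias the paper aims to mitigate.
    """
    degrees = np.asarray(degrees)
    correct = np.asarray(y_true) == np.asarray(y_pred)
    out = {}
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (degrees >= lo) & (degrees < hi)
        if mask.any():
            out[(lo, hi)] = float(correct[mask].mean())
    return out
```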
Innovation

Methods, ideas, or system contributions that make the work stand out.

HAR loss mitigates degree bias adaptively
Adds positive pairs using node labels
SHARP framework extends HAR broadly