FairGP: A Scalable and Fair Graph Transformer Using Graph Partitioning

📅 2024-12-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing graph Transformers (GTs) suffer from prohibitive computational overhead and poor subgroup fairness. This paper proposes the first scalable and fair GT framework based on graph partitioning. First, the authors theoretically characterize the bias propagation mechanism: the sensitive features of higher-order nodes induce bias in lower-order node representations. They then jointly design graph partitioning and fairness-aware attention to achieve structural compression and bias mitigation simultaneously. Evaluated on six real-world datasets, the method significantly outperforms state-of-the-art approaches: fairness metrics (Statistical Parity Difference and Equalized Odds Difference) improve by 37% on average, inference memory consumption decreases by 52%, and training speed increases by 2.1×. To the best of the authors' knowledge, this is the first GT framework that concurrently ensures strong group fairness and computational scalability.
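The two fairness metrics named above have standard definitions: Statistical Parity Difference is the gap in positive-prediction rates between sensitive groups, and Equalized Odds Difference is the gap in true/false positive rates. A minimal NumPy sketch (the arrays and the max-over-rates variant of EOD are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def statistical_parity_diff(y_pred, s):
    """|P(y_hat=1 | s=0) - P(y_hat=1 | s=1)| for a binary sensitive attribute s."""
    return abs(y_pred[s == 0].mean() - y_pred[s == 1].mean())

def equalized_odds_diff(y_true, y_pred, s):
    """Largest gap between sensitive groups in TPR (y=1) and FPR (y=0).
    Some papers average the two gaps instead of taking the max."""
    gaps = []
    for y in (0, 1):  # y=0 gives the FPR gap, y=1 the TPR gap
        mask = y_true == y
        r0 = y_pred[mask & (s == 0)].mean()
        r1 = y_pred[mask & (s == 1)].mean()
        gaps.append(abs(r0 - r1))
    return max(gaps)

# Toy example (hypothetical data, not from the paper's experiments)
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 1, 0, 1, 0])
s      = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(statistical_parity_diff(y_pred, s))            # 0.0 (equal positive rates)
print(equalized_odds_diff(y_true, y_pred, s))
```

Lower values of both metrics indicate fairer predictions; the paper's reported 37% average improvement is measured on these quantities.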

📝 Abstract
Recent studies have highlighted significant fairness issues in Graph Transformer (GT) models, particularly against subgroups defined by sensitive features. Additionally, GTs are computationally intensive and memory-demanding, limiting their application to large-scale graphs. Our experiments demonstrate that graph partitioning can enhance the fairness of GT models while reducing computational complexity. To understand this improvement, we conducted a theoretical investigation into the root causes of fairness issues in GT models. We found that the sensitive features of higher-order nodes disproportionately influence lower-order nodes, resulting in sensitive feature bias. We propose Fairness-aware scalable GT based on Graph Partitioning (FairGP), which partitions the graph to minimize the negative impact of higher-order nodes. By optimizing attention mechanisms, FairGP mitigates the bias introduced by global attention, thereby enhancing fairness. Extensive empirical evaluations on six real-world datasets validate the superior performance of FairGP in achieving fairness compared to state-of-the-art methods. The code is available at https://github.com/LuoRenqiang/FairGP.
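The scalability half of the abstract's claim rests on a standard idea: once the graph is partitioned, global attention can be restricted to run within each partition, cutting the quadratic cost of full attention. The sketch below illustrates that idea only; it is not FairGP's actual implementation (the random partition stands in for a proper graph partitioner, and learned projection weights are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, k = 12, 8, 3  # nodes, feature dim, number of partitions

X = rng.standard_normal((N, d))
part = rng.integers(0, k, size=N)  # stand-in for a METIS-style partition assignment

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Full global attention costs O(N^2); attending only within each
# partition shrinks this to O(sum_c |V_c|^2).
out = np.zeros_like(X)
for c in range(k):
    idx = np.where(part == c)[0]
    if idx.size == 0:
        continue
    Q = K = V = X[idx]                 # single head, projections omitted for brevity
    A = softmax(Q @ K.T / np.sqrt(d))  # attention restricted to the cluster
    out[idx] = A @ V

print(out.shape)  # (12, 8)
```

Restricting attention to partitions is also where the fairness lever sits in the paper's framing: it limits how far the sensitive features of higher-order nodes can propagate through global attention.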
Problem

Research questions and friction points this paper is trying to address.

Graph Transformers
Resource Consumption
Group Fairness
Innovation

Methods, ideas, or system contributions that make the work stand out.

FairGP
Attention Mechanism Improvement
Graph Transformer Fairness