🤖 AI Summary
Existing fair graph neural network (GNN) methods are typically tailored to a single sensitive attribute (e.g., race, age), so accommodating a different attribute requires retraining the model from scratch, incurring high computational cost. Method: This paper analyzes graph fairness from a causal modeling perspective, identifying the confounding effect induced by the sensitive attribute as the underlying cause of unfairness, and reformulates the problem as invariant learning across environments. The proposed framework, FairINV, incorporates sensitive attribute partition and trains fair GNNs by eliminating spurious correlations between the label and various sensitive attributes, so a single training session yields a model that accommodates multiple sensitive attributes without attribute-specific retraining. Contribution/Results: Evaluated on several real-world graph datasets, FairINV significantly outperforms state-of-the-art fair GNN methods in both predictive performance and fairness, while avoiding per-attribute retraining.
📝 Abstract
Recent studies have highlighted fairness issues in Graph Neural Networks (GNNs), which can produce discriminatory predictions against protected groups defined by sensitive attributes such as race and age. While various efforts to enhance GNN fairness have made significant progress, these approaches are often tailored to a specific sensitive attribute. Consequently, they require retraining the model from scratch whenever the sensitive attribute of interest changes, resulting in high computational costs. To gain deeper insight into this issue, we approach the graph fairness problem from a causal modeling perspective, where we identify the confounding effect induced by the sensitive attribute as the underlying cause. Motivated by this observation, we formulate the fairness problem in graphs from an invariant learning perspective, which aims to learn representations that are invariant across environments. Accordingly, we propose FairINV, a graph fairness framework based on invariant learning, which enables the training of fair GNNs that accommodate various sensitive attributes within a single training session. Specifically, FairINV incorporates sensitive attribute partition and trains fair GNNs by eliminating spurious correlations between the label and various sensitive attributes. Experimental results on several real-world datasets demonstrate that FairINV significantly outperforms state-of-the-art fairness approaches, underscoring its effectiveness. Our code is available at: https://github.com/ZzoomD/FairINV/.
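To make the invariant-learning formulation concrete: one common instantiation is an IRMv1-style penalty, where the risk on each environment is regularized by the squared gradient of that risk with respect to a fixed scalar classifier scale, and the partitions induced by a sensitive attribute play the role of environments. The sketch below is a minimal pure-Python illustration of that generic penalty for binary cross-entropy, not the authors' FairINV implementation; the function names, the BCE setup, and the use of attribute groups as environments are illustrative assumptions.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def irm_penalty(logits, labels):
    # IRMv1-style penalty: squared gradient of the environment risk with
    # respect to a scalar classifier scale w, evaluated at w = 1.
    # For BCE(w * logit, y), dR/dw at w = 1 is mean(logit * (sigmoid(logit) - y)).
    grad = sum(l * (sigmoid(l) - y) for l, y in zip(logits, labels)) / len(logits)
    return grad ** 2

def invariant_risk(logits, labels, env_ids, penalty_weight=1.0):
    # Average BCE risk over all nodes, plus the invariance penalty averaged
    # over environments (here: groups induced by a sensitive attribute).
    eps = 1e-12
    risk = -sum(
        y * math.log(sigmoid(l) + eps) + (1 - y) * math.log(1.0 - sigmoid(l) + eps)
        for l, y in zip(logits, labels)
    ) / len(logits)
    penalties = []
    for e in sorted(set(env_ids)):
        idx = [i for i, g in enumerate(env_ids) if g == e]
        penalties.append(irm_penalty([logits[i] for i in idx],
                                     [labels[i] for i in idx]))
    penalty = sum(penalties) / len(penalties)
    return risk + penalty_weight * penalty
```

A large `penalty_weight` pushes the learned representation toward predictors whose optimal classifier is shared across attribute groups, which is the sense in which spurious label-attribute correlations are suppressed; FairINV additionally infers the attribute partition itself rather than assuming it is given.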