Model-Agnostic Fairness Regularization for GNNs with Incomplete Sensitive Information

📅 2025-11-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing fairness-aware graph neural networks (GNNs) commonly assume full observability of sensitive attributes (e.g., race, gender) for all nodes during training—a strong assumption often violated in practice due to privacy constraints and data incompleteness. Method: We propose a model-agnostic, differentiable fairness regularization framework that, for the first time, unifies equal opportunity and statistical parity as jointly optimizable objectives under partial sensitive attribute availability. Leveraging message passing, our method jointly exploits graph topology and node features to dynamically impute missing sensitive attributes and mitigate bias during training. Contribution/Results: Evaluated on five real-world datasets, our approach consistently improves fairness metrics—including ΔEO and ΔSP—while preserving or slightly enhancing classification accuracy. It thus achieves a superior fairness–accuracy trade-off compared to state-of-the-art baselines, demonstrating robustness and practical applicability in realistic, privacy-sensitive settings.
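The summary mentions that missing sensitive attributes are dynamically imputed from graph topology and node features via message passing. As a rough illustration of that idea (not the paper's actual algorithm), the hypothetical sketch below propagates observed binary sensitive attributes over the graph by iterative neighbour averaging, producing a soft estimate in [0, 1] for unobserved nodes:

```python
# Hypothetical sketch: soft imputation of missing binary sensitive
# attributes by iterative neighbour averaging, a minimal stand-in for
# the message-passing-based imputation described in the summary.

def impute_sensitive(adj, s_obs, n_iters=10):
    """adj: {node: [neighbours]}; s_obs: {node: 0/1} for observed nodes.
    Returns a soft sensitive-attribute estimate in [0, 1] for every node."""
    nodes = list(adj)
    # Observed values stay fixed; missing nodes start at the neutral 0.5.
    s = {v: float(s_obs[v]) if v in s_obs else 0.5 for v in nodes}
    for _ in range(n_iters):
        new_s = {}
        for v in nodes:
            if v in s_obs:                 # keep known attributes fixed
                new_s[v] = float(s_obs[v])
            elif adj[v]:                   # average neighbour estimates
                new_s[v] = sum(s[u] for u in adj[v]) / len(adj[v])
            else:                          # isolated node: nothing to propagate
                new_s[v] = s[v]
        s = new_s
    return s
```

For example, on a path graph 0–1–2 with s(0)=0 and s(2)=1 observed, the middle node settles at 0.5, reflecting its ambiguous neighbourhood. The paper's method presumably also uses node features, which this topology-only sketch omits.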

📝 Abstract
Graph Neural Networks (GNNs) have demonstrated exceptional efficacy in relational learning tasks, including node classification and link prediction. However, their application raises significant fairness concerns, as GNNs can perpetuate and even amplify societal biases against protected groups defined by sensitive attributes such as race or gender. These biases are often inherent in the node features, structural topology, and message-passing mechanisms of the graph itself. A critical limitation of existing fairness-aware GNN methods is their reliance on the strong assumption that sensitive attributes are fully available for all nodes during training, a condition that poses a practical impediment due to privacy concerns and data collection constraints. To address this gap, we propose a novel, model-agnostic fairness regularization framework designed for the realistic scenario where sensitive attributes are only partially available. Our approach formalizes a fairness-aware objective function that integrates both equal opportunity and statistical parity as differentiable regularization terms. Through a comprehensive empirical evaluation across five real-world benchmark datasets, we demonstrate that the proposed method significantly mitigates bias across key fairness metrics while maintaining competitive node classification performance. Results show that our framework consistently outperforms baseline models in achieving a favorable fairness-accuracy trade-off, with minimal degradation in predictive accuracy. The datasets and source code will be publicly released at https://github.com/mtavassoli/GNN-FC.
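The abstract describes integrating equal opportunity and statistical parity as differentiable regularization terms. The exact formulation is not given here, but a common way to make these metrics differentiable is to compute the group gaps on soft predicted probabilities rather than hard decisions. The sketch below is a hypothetical illustration in that spirit; `lam_sp` and `lam_eo` are assumed weighting hyperparameters, not names from the paper:

```python
# Hypothetical sketch: a fairness penalty combining soft statistical
# parity (ΔSP) and equal opportunity (ΔEO) gaps, computed on predicted
# probabilities so it remains differentiable w.r.t. the model output.

def fairness_penalty(p_hat, s, y, lam_sp=1.0, lam_eo=1.0):
    """p_hat: predicted positive-class probabilities; s: sensitive
    attribute (0/1); y: ground-truth labels (0/1).
    Returns lam_sp * ΔSP + lam_eo * ΔEO."""
    def group_mean(vals, mask):
        sel = [v for v, m in zip(vals, mask) if m]
        return sum(sel) / len(sel) if sel else 0.0
    # ΔSP: gap in mean predicted probability between the two groups.
    d_sp = abs(group_mean(p_hat, [si == 1 for si in s])
               - group_mean(p_hat, [si == 0 for si in s]))
    # ΔEO: the same gap restricted to truly positive nodes,
    # i.e. a soft true-positive-rate difference.
    d_eo = abs(group_mean(p_hat, [si == 1 and yi == 1 for si, yi in zip(s, y)])
               - group_mean(p_hat, [si == 0 and yi == 1 for si, yi in zip(s, y)]))
    return lam_sp * d_sp + lam_eo * d_eo
```

In a training loop, such a term would be added to the task loss (e.g. cross-entropy), with `s` taken from observed sensitive attributes where available and from imputed soft estimates elsewhere, per the partial-availability setting the paper targets.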
Problem

Research questions and friction points this paper is trying to address.

Addresses fairness in GNNs with incomplete sensitive attributes
Proposes model-agnostic regularization for bias mitigation
Maintains a favorable fairness-accuracy trade-off in node classification tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Model-agnostic fairness regularization for GNNs
Handles incomplete sensitive attributes in graphs
Integrates equal opportunity and statistical parity terms
M. Tavassoli Kejani
Institut de Mathématiques de Toulouse
Fadi Dornaika
IKERBASQUE Research Foundation
computer vision · pattern recognition · machine learning
J. M. Loubes
Institut de Mathématiques de Toulouse, INRIA, Regalia Team