Mitigating Bias in Graph Hyperdimensional Computing

📅 2025-12-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work presents the first systematic study of algorithmic fairness in Graph Hyperdimensional Computing (Graph HDC). Addressing implicit group biases embedded in hypervector representations and decision rules, we propose FairGHDC—a fairness-aware framework that introduces a differentiable, gap-based group fairness regularizer. This regularizer is converted into a scalar modulation factor and integrated into a backpropagation-free, hypervector-like update mechanism, enabling efficient bias mitigation without modifying the encoder. FairGHDC preserves HDC’s intrinsic training efficiency while simultaneously ensuring representation fairness and discriminative capability. Evaluated on six benchmark graph datasets, FairGHDC achieves substantial fairness improvements—reducing statistical parity difference (ΔSP ≤ 0.03) and equalized odds difference (ΔEO ≤ 0.04)—while matching the classification accuracy of state-of-the-art GNNs and fairness-aware GNNs. Moreover, it accelerates training by approximately 10× compared to gradient-based alternatives.

📝 Abstract
Graph hyperdimensional computing (HDC) has emerged as a promising paradigm for cognitive tasks, emulating brain-like computation with high-dimensional vectors known as hypervectors. While HDC offers robustness and efficiency on graph-structured data, its fairness implications remain largely unexplored. In this paper, we study fairness in graph HDC, where biases in data representation and decision rules can lead to unequal treatment of different groups. We show how hypervector encoding and similarity-based classification can propagate or even amplify such biases, and we propose a fairness-aware training framework, FairGHDC, to mitigate them. FairGHDC introduces a bias correction term, derived from a gap-based demographic-parity regularizer, and converts it into a scalar fairness factor that scales the update of the class hypervector for the ground-truth label. This enables debiasing directly in the hypervector space without modifying the graph encoder or requiring backpropagation. Experimental results on six benchmark datasets demonstrate that FairGHDC substantially reduces demographic-parity and equal-opportunity gaps while maintaining accuracy comparable to standard GNNs and fairness-aware GNNs. At the same time, FairGHDC preserves the computational advantages of HDC, achieving up to about one order of magnitude ($\approx 10\times$) speedup in training time on GPU compared to GNN and fairness-aware GNN baselines.
Problem

Research questions and friction points this paper is trying to address.

Addresses fairness issues in graph hyperdimensional computing
Mitigates bias propagation in hypervector encoding and classification
Proposes a training framework to reduce demographic-parity gaps
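The demographic-parity gap the framework targets can be made concrete with a small sketch. This is an illustrative implementation of the standard statistical parity difference (ΔSP) metric, not the paper's code; the function and variable names are assumptions.

```python
# Illustrative sketch: statistical parity difference (ΔSP), the group
# fairness gap that the paper's regularizer is built around.
import numpy as np

def statistical_parity_difference(y_pred, group):
    """ΔSP = |P(ŷ=1 | group=0) − P(ŷ=1 | group=1)|."""
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    p0 = y_pred[group == 0].mean()  # positive rate for group 0
    p1 = y_pred[group == 1].mean()  # positive rate for group 1
    return abs(p0 - p1)

# Toy predictions: group 0 gets positives at rate 2/3, group 1 at 1/3.
preds = np.array([1, 0, 1, 1, 0, 0])
groups = np.array([0, 0, 0, 1, 1, 1])
print(round(statistical_parity_difference(preds, groups), 3))  # → 0.333
```

A perfectly parity-fair classifier would drive this value to 0; the paper reports ΔSP ≤ 0.03 after mitigation.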
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces a fairness-aware training framework called FairGHDC
Uses a bias correction term derived from demographic-parity regularizer
Debiases directly in hypervector space without modifying graph encoder
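The update mechanism described above — scaling the bundling update of the ground-truth class hypervector by a scalar fairness factor — can be sketched as follows. This is a minimal illustration under assumed conventions (random projection encodings, cosine-similarity classification); how the fairness factor itself is derived from the regularizer is not reproduced here, and all names (`D`, `lr`, `fairness_factor`) are hypothetical.

```python
# Minimal sketch of a fairness-modulated, backpropagation-free
# class-hypervector update in the style the summary describes.
import numpy as np

D = 10000  # hypervector dimensionality (typical HDC scale)
rng = np.random.default_rng(0)

def update_class_hv(class_hvs, x_hv, y_true, fairness_factor, lr=1.0):
    """Bundle the sample hypervector into its ground-truth class,
    scaled by a scalar fairness factor (debiasing in hypervector space,
    with no change to the encoder and no backpropagation)."""
    class_hvs[y_true] += lr * fairness_factor * x_hv
    return class_hvs

def predict(class_hvs, x_hv):
    # Similarity-based classification: nearest class by cosine similarity.
    sims = class_hvs @ x_hv / (
        np.linalg.norm(class_hvs, axis=1) * np.linalg.norm(x_hv) + 1e-12)
    return int(np.argmax(sims))

class_hvs = rng.standard_normal((2, D))
x_hv = rng.standard_normal(D)
# A factor below 1 damps an update that would widen the group gap;
# 0.5 is just an illustrative value.
class_hvs = update_class_hv(class_hvs, x_hv, y_true=1, fairness_factor=0.5)
print(predict(class_hvs, x_hv))  # the updated class now wins → 1
```

Because the correction is a single scalar multiplying an additive update, training stays as cheap as standard HDC bundling, which is consistent with the reported ~10× speedup over gradient-based baselines.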