Don't Hash Me Like That: Exposing and Mitigating Hash-Induced Unfairness in Local Differential Privacy

📅 2025-06-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper identifies and formalizes, for the first time, "hash-induced unfairness" in Local Differential Privacy (LDP): even under identical protocols and privacy budgets, different hash functions induce substantial disparities in per-user security strength, leading to unequal vulnerability to inference and poisoning attacks. To address this, the authors propose an entropy-constrained fair hash selection mechanism and design Fair-OLH, an LDP protocol that jointly optimizes perturbation and encoding on the user side. Experiments show that Fair-OLH significantly reduces inter-user variance in attack success rates (an average reduction of 42.6%) with acceptable computational overhead, enhancing both the fairness and robustness of LDP systems. The work shifts the focus of LDP security evaluation and protocol design from aggregate privacy guarantees to equitable protection across users.

📝 Abstract
Local differential privacy (LDP) has become a widely accepted framework for privacy-preserving data collection. In LDP, many protocols rely on hash functions to implement user-side encoding and perturbation. However, the security and privacy implications of hash function selection have not been previously investigated. In this paper, we expose hash functions as a source of unfairness in LDP protocols: although users operate under the same protocol and privacy budget, differences in their hash functions can lead to significant disparities in vulnerability to inference and poisoning attacks. To mitigate this hash-induced unfairness, we propose Fair-OLH (F-OLH), a variant of OLH that enforces an entropy-based fairness constraint on hash function selection. Experiments show that F-OLH is effective in mitigating hash-induced unfairness under acceptable time overheads.
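The abstract's core idea — OLH's hash-then-perturb user-side report, plus an entropy constraint on which hash function a user is allowed to use — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the SHA-256-seeded hash family, the resampling loop, and the 95%-of-maximum entropy threshold are all assumptions made here for concreteness; the paper's actual hash family, entropy measure, and selection procedure may differ.

```python
import hashlib
import math
import random

def hash_fn(seed, g):
    """A seeded hash family member mapping values to g buckets
    (SHA-256-based; illustrative, not the paper's choice of family)."""
    def h(v):
        data = f"{seed}:{v}".encode()
        return int.from_bytes(hashlib.sha256(data).digest(), "big") % g
    return h

def bucket_entropy(h, domain, g):
    """Shannon entropy (bits) of the bucket distribution h induces
    over the input domain; higher means more uniform hashing."""
    counts = [0] * g
    for v in domain:
        counts[h(v)] += 1
    n = len(domain)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def pick_fair_hash(domain, g, min_entropy, max_tries=1000):
    """Entropy-constrained selection (assumed mechanism): resample
    seeds until the induced bucket distribution is uniform enough."""
    for _ in range(max_tries):
        seed = random.getrandbits(64)
        h = hash_fn(seed, g)
        if bucket_entropy(h, domain, g) >= min_entropy:
            return seed, h
    raise RuntimeError("no hash met the entropy constraint")

def olh_report(h, value, g, eps):
    """Standard OLH user-side report: hash the value, then apply
    generalized randomized response over the g buckets."""
    x = h(value)
    p = math.exp(eps) / (math.exp(eps) + g - 1)
    if random.random() < p:
        return x  # report the true bucket
    y = random.randrange(g - 1)  # otherwise a uniform other bucket
    return y if y < x else y + 1

# Example: 100-item domain, eps = 1, OLH's g = round(e^eps) + 1 = 4.
domain = list(range(100))
eps = 1.0
g = int(round(math.exp(eps))) + 1
seed, h = pick_fair_hash(domain, g, min_entropy=0.95 * math.log2(g))
report = olh_report(h, 42, g, eps)
assert 0 <= report < g
```

The intuition the sketch captures: a user whose hash collapses many domain values into few buckets (low entropy) is easier to attack than one whose hash spreads values uniformly, even though both run the same protocol with the same eps; the entropy check rejects such unlucky hash functions before the user reports.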
Problem

Research questions and friction points this paper is trying to address.

Exposing hash-induced unfairness in LDP protocols
Mitigating disparities in vulnerability to attacks
Proposing Fair-OLH for entropy-based fairness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Exposes hash-induced unfairness in LDP
Proposes Fair-OLH with entropy fairness
Mitigates unfairness with acceptable overhead
Berkay Kemal Balioglu
Department of Computer Engineering, Koç University, Istanbul, Türkiye
Alireza Khodaie
Department of Computer Engineering, Koç University, Istanbul, Türkiye
M. Emre Gursoy
Assistant Professor of Computer Science, Koç University
Privacy · Security · AI Security · Machine Learning · Internet of Things