Poisoning Attacks to Local Differential Privacy for Ranking Estimation

📅 2025-06-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper reveals a critical vulnerability of Local Differential Privacy (LDP) to poisoning attacks in ranking estimation: an adversary can deploy only a small number of sybil users to precisely manipulate item frequencies, distort ranking outcomes, and maximize personal gain. To formalize this threat, the authors propose a unified attack framework that defines attack cost and optimal target items. For three mainstream LDP protocols—k-RR, OUE, and OLH—they design multi-round iterative attack algorithms grounded in frequency perturbation analysis, hash preimage modeling, and confidence-driven optimization. Crucially, they introduce a confidence-level metric to quantify attack success probability. Theoretical analysis and extensive experiments demonstrate that the attack achieves significant rank distortion at low overhead. This work provides the first systematic characterization of the security boundaries of LDP-based ranking mechanisms, establishing foundational insights for designing robust defenses.

📝 Abstract
Local differential privacy (LDP) involves users perturbing their inputs to provide plausible deniability of their data. However, this also makes LDP vulnerable to poisoning attacks. In this paper, we first introduce novel poisoning attacks for ranking estimation. These attacks are intricate, as fake attackers do not merely adjust the frequency of target items. Instead, they leverage a limited number of fake users to precisely modify frequencies, effectively altering item rankings to maximize gains. To tackle this challenge, we introduce the concepts of attack cost and optimal attack item (set), and propose corresponding strategies for the kRR, OUE, and OLH protocols. For kRR, we iteratively select optimal attack items and allocate suitable fake users. For OUE, we iteratively determine optimal attack item sets and consider the incremental changes in item frequencies across different sets. For OLH, we develop a harmonic cost function based on the pre-image of a hash to select the hash function that supports a larger number of effective attack items. Lastly, we present an attack strategy based on confidence levels to quantify the probability of a successful attack and the number of attack iterations more precisely. We demonstrate the effectiveness of our attacks through theoretical and empirical evidence, highlighting the necessity for defenses against these attacks. The source code and data have been made available at https://github.com/LDP-user/LDP-Ranking.git.
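To ground the frequency estimation these attacks target, here is a minimal sketch of the standard k-RR (k-ary randomized response) mechanism and its unbiased frequency estimator. This is not the paper's code; the function names are illustrative, and the formulas are the textbook k-RR ones.

```python
import math
import random

def krr_perturb(value, k, eps):
    """k-RR: keep the true value with probability p = e^eps / (e^eps + k - 1),
    otherwise report one of the other k - 1 values uniformly at random."""
    p = math.exp(eps) / (math.exp(eps) + k - 1)
    if random.random() < p:
        return value
    other = random.randrange(k - 1)          # pick among the k - 1 "wrong" values
    return other if other < value else other + 1

def krr_estimate(reports, k, eps):
    """Unbiased frequency estimates: invert the perturbation via
    est_v = (count_v / n - q) / (p - q), where q = (1 - p) / (k - 1)."""
    p = math.exp(eps) / (math.exp(eps) + k - 1)
    q = (1 - p) / (k - 1)
    n = len(reports)
    counts = [0] * k
    for r in reports:
        counts[r] += 1
    return [(c / n - q) / (p - q) for c in counts]
```

Because the server only sees perturbed reports and debiases them with fixed constants p and q, fake users who submit adversarially chosen (rather than honestly perturbed) reports shift the estimates in a fully predictable way, which is the leverage the paper's attacks exploit.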
Problem

Research questions and friction points this paper is trying to address.

Novel poisoning attacks on LDP ranking estimation
Optimizing attack strategies for the kRR, OUE, and OLH protocols
Quantifying attack success via confidence level strategy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces poisoning attacks for ranking estimation
Proposes attack strategies for kRR, OUE, OLH protocols
Develops harmonic cost function for OLH attacks
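To see why a small number of fake users can distort a ranking, here is a hedged simplification: the paper's attacks select optimal target items and allocate fake users iteratively per protocol, whereas this sketch simply has every fake user report one target item under k-RR and compares the estimates before and after.

```python
import math

def krr_estimate(reports, k, eps):
    # Standard unbiased k-RR frequency estimator (same formulas as above).
    p = math.exp(eps) / (math.exp(eps) + k - 1)
    q = (1 - p) / (k - 1)
    n = len(reports)
    counts = [0] * k
    for r in reports:
        counts[r] += 1
    return [(c / n - q) / (p - q) for c in counts]

# 100 honest reports; eps is chosen large so the estimates track raw counts
# and the poisoning effect is easy to read off. Then 25 fake users all
# claim item 2 -- enough to push its estimate past item 1's.
genuine = [0] * 60 + [1] * 30 + [2] * 10
baseline = krr_estimate(genuine, k=3, eps=50.0)
poisoned = krr_estimate(genuine + [2] * 25, k=3, eps=50.0)
```

In the baseline, item 2 ranks last (estimate near 10/100); after poisoning, its estimate (near 35/125) overtakes item 1's (near 30/125), flipping their order. This rank flip at small fake-user cost is the effect the paper formalizes through its attack-cost and optimal-attack-item notions.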
Authors
Pei Zhan — School of Cyber Science and Technology, Shandong University, Qingdao, China
Peng Tang
Yangzhuo Li — School of Cyber Science and Technology, Shandong University, Qingdao, China
Puwen Wei — School of Cyber Science and Technology, Shandong University, Qingdao, China
Shanqing Guo — Shandong University