Unifying Re-Identification, Attribute Inference, and Data Reconstruction Risks in Differential Privacy

📅 2025-07-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing differential privacy (DP) mechanisms lack a principled mapping from standard privacy parameters (e.g., ε, α) to concrete adversarial risks -- re-identification, attribute inference, and data reconstruction -- resulting in poor interpretability and inconsistent calibration. This work introduces a unified risk characterization framework grounded in statistical hypothesis testing, using the testing interpretation of DP as the common theoretical link across these three fundamental privacy attacks. The framework is stated natively in terms of $f$-DP, and its bounds are tighter than prior ones derived via ε-DP, Rényi DP, or concentrated DP. At identical benchmark risk levels, calibrating with these bounds reduces the required noise by roughly 20% compared to prior calibration methods, yielding, for example, a more than 15-percentage-point accuracy improvement on a text classification task. These gains enhance both the practical utility and the interpretability of DP guarantees.

📝 Abstract
Differentially private (DP) mechanisms are difficult to interpret and calibrate because existing methods for mapping standard privacy parameters to concrete privacy risks -- re-identification, attribute inference, and data reconstruction -- are both overly pessimistic and inconsistent. In this work, we use the hypothesis-testing interpretation of DP ($f$-DP), and determine that bounds on attack success can take the same unified form across re-identification, attribute inference, and data reconstruction risks. Our unified bounds are (1) consistent across a multitude of attack settings, and (2) tunable, enabling practitioners to evaluate risk with respect to arbitrary (including worst-case) levels of baseline risk. Empirically, our results are tighter than prior methods using $\varepsilon$-DP, Rényi DP, and concentrated DP. As a result, calibrating noise using our bounds can reduce the required noise by 20% at the same risk level, which yields, e.g., a more than 15-percentage-point accuracy increase in a text classification task. Overall, this unifying perspective provides a principled framework for interpreting and calibrating the degree of protection in DP against specific levels of re-identification, attribute inference, or data reconstruction risk.
Problem

Research questions and friction points this paper is trying to address.

Unify privacy risk metrics in differential privacy
Improve interpretability of DP mechanisms
Reduce noise while maintaining privacy protection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified bounds for multiple privacy risks
Tunable risk evaluation for arbitrary baselines
Reduced noise with tighter empirical bounds
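The calibration idea above can be illustrated with a minimal sketch. Assuming the Gaussian mechanism under μ-Gaussian-DP (a special case of $f$-DP with trade-off curve $G_\mu(\alpha) = \Phi(\Phi^{-1}(1-\alpha) - \mu)$), any attacker's true-positive rate at false-positive rate α is bounded by $1 - G_\mu(\alpha)$, so noise can be set to the smallest σ keeping that bound below a target risk level. The function names here are hypothetical, not the paper's API.

```python
from statistics import NormalDist

N = NormalDist()  # standard normal; Phi = N.cdf, Phi^{-1} = N.inv_cdf

def attack_power_bound(alpha: float, mu: float) -> float:
    """Upper bound on any attacker's true-positive rate at false-positive
    rate alpha under mu-Gaussian-DP: 1 - G_mu(alpha), where
    G_mu(alpha) = Phi(Phi^{-1}(1 - alpha) - mu)."""
    return 1.0 - N.cdf(N.inv_cdf(1.0 - alpha) - mu)

def calibrate_sigma(alpha: float, target_power: float,
                    sensitivity: float = 1.0,
                    lo: float = 1e-3, hi: float = 1e3) -> float:
    """Bisection for the smallest noise scale sigma such that the attack
    power bound at level alpha stays below target_power (mu = sensitivity/sigma)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if attack_power_bound(alpha, sensitivity / mid) > target_power:
            lo = mid  # too little noise: attacker exceeds the target risk
        else:
            hi = mid
    return hi
```

For example, `calibrate_sigma(0.05, 0.5)` returns the least σ for which no level-0.05 membership test can succeed more than half the time; a looser risk target yields a smaller σ, which is the noise reduction the abstract quantifies.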