🤖 AI Summary
Differential privacy (DP) lacks an intuitive interpretation in terms of statistical disclosure risk, which hinders practitioners' understanding and impedes trustworthy deployment.
Method: This work establishes a rigorous theoretical link between the DP parameters (ε, δ) and quantifiable disclosure risk. Integrating statistical inference theory, privacy analysis, and risk modeling, it derives tight upper bounds on an adversary's worst-case success probability in inferring sensitive attributes under DP.
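The summary does not reproduce the paper's actual bounds, but the flavor of such a result can be sketched with the standard Bayesian posterior bound for pure ε-DP (the δ term of approximate DP adds only a small additive slack, omitted here). The function below is a minimal illustration under that assumption, not the paper's derivation:

```python
import math

def posterior_bound(prior: float, epsilon: float) -> float:
    """Worst-case posterior belief an adversary can reach about a sensitive
    attribute after observing one output of a pure epsilon-DP mechanism.

    Pure epsilon-DP bounds the likelihood ratio by e^eps, so by Bayes' rule
    the posterior odds are at most e^eps times the prior odds:
        posterior <= e^eps * p / (e^eps * p + (1 - p)).
    """
    odds = math.exp(epsilon) * prior / (1.0 - prior)  # bounded posterior odds
    return odds / (1.0 + odds)

# A 1% prior can rise to at most ~2.7% under eps = 1, and ~17% under eps = 3.
for eps in (0.1, 1.0, 3.0):
    print(f"eps = {eps}: prior 1% -> posterior <= {posterior_bound(0.01, eps):.4f}")
```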
Contribution/Results: The framework endows ε and δ with concrete, risk-based semantics, interpreting them as guarantees of bounded inference risk. It further provides a risk-accumulation interpretation of the composition theorems, enabling principled, scenario-aware selection and validation of privacy parameters. These results strengthen the interpretability and credibility of DP, offering both theoretical foundations and actionable guidance for privacy engineering practice.
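As a rough illustration of the risk-accumulation view, the sketch below composes k releases of an (ε, δ)-DP mechanism using the textbook basic and advanced composition theorems (Dwork and Roth) and tracks how the resulting posterior bound on a 1% prior grows; the paper's own accumulation analysis may differ:

```python
import math

def basic_composition(eps: float, delta: float, k: int) -> tuple[float, float]:
    """k-fold basic composition: (eps, delta) budgets add up linearly."""
    return k * eps, k * delta

def advanced_composition(eps: float, delta: float, k: int,
                         delta_slack: float) -> tuple[float, float]:
    """k-fold advanced composition (Dwork & Roth, Thm 3.20): for any slack
    delta' > 0 the composition is (eps', k*delta + delta')-DP with
        eps' = eps * sqrt(2k ln(1/delta')) + k * eps * (e^eps - 1).
    """
    eps_total = (eps * math.sqrt(2 * k * math.log(1 / delta_slack))
                 + k * eps * (math.exp(eps) - 1))
    return eps_total, k * delta + delta_slack

# Track how the worst-case posterior bound on a 1% prior accumulates as
# more eps = 0.1 releases are answered (taking the better of the two bounds).
prior = 0.01
for k in (1, 10, 100):
    eps_b, _ = basic_composition(0.1, 1e-6, k)
    eps_a, _ = advanced_composition(0.1, 1e-6, k, delta_slack=1e-6)
    eps_eff = min(eps_b, eps_a)
    odds = math.exp(eps_eff) * prior / (1.0 - prior)
    print(f"k = {k:3d}: eps = {eps_eff:.2f}, posterior bound = {odds/(1+odds):.3f}")
```

Basic composition is tighter for small k, while advanced composition wins for large k; either way, the posterior bound makes the accumulating risk concrete (here, from ~1.1% at k = 1 to ~85% at k = 100).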
📝 Abstract
As the use of differential privacy (DP) becomes widespread, the development of effective tools for reasoning about the privacy guarantee becomes increasingly critical. In pursuit of this goal, we demonstrate novel relationships between DP and measures of statistical disclosure risk. We suggest how experts and non-experts can use these results to explain the DP guarantee, interpret DP composition theorems, select and justify privacy parameters, and identify worst-case adversary prior probabilities.
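The abstract's last point, identifying worst-case adversary prior probabilities, admits a clean closed form under one natural reading: the prior at which a pure ε-DP guarantee permits the largest posterior-minus-prior gain. The sketch below assumes that particular risk measure (the paper may define worst-case priors differently); setting the derivative of the gap to zero yields p* = 1/(1 + e^(ε/2)).

```python
import math

def worst_case_prior(epsilon: float) -> float:
    """Prior p* that maximizes the posterior-minus-prior gap under pure eps-DP.

    Maximizing g(p) = e^eps * p / ((e^eps - 1) * p + 1) - p over p in (0, 1)
    gives p* = 1 / (1 + e^(eps/2)); the maximal gap equals tanh(eps/4).
    """
    return 1.0 / (1.0 + math.exp(epsilon / 2.0))

for eps in (0.5, 1.0, 3.0):
    p_star = worst_case_prior(eps)
    print(f"eps = {eps}: worst-case prior = {p_star:.3f}, "
          f"max risk increase = {math.tanh(eps / 4.0):.3f}")
```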