Infinitely Divisible Noise for Differential Privacy: Nearly Optimal Error in the High $\varepsilon$ Regime

📅 2025-04-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses error optimization for distributed differential privacy (DP) in the high-privacy-budget regime (ε ≥ 1). We propose two infinitely divisible additive noise mechanisms: the Generalized Discrete Laplace (GDL) and the Multi-Scale Discrete Laplace (MSDLap). MSDLap is the first infinitely divisible, query-independent mechanism to match known error lower bounds under pure DP. We further design a discrete-to-continuous noise transformation, combined with exact sampling and multi-message shuffle DP protocols, attaining optimal mean squared error (MSE) of order O(Δ²e^(−2ε/3)), significantly improving upon the prior Arete mechanism. This is the first construction achieving order-optimal MSE under the infinite divisibility constraint, demonstrating that this structural requirement incurs no utility loss. Our implementation is open-sourced for efficiency and reproducibility.

📝 Abstract
Differential privacy (DP) can be achieved in a distributed manner, where multiple parties add independent noise such that their sum protects the overall dataset with DP. A common technique here is for each party to sample their noise from the decomposition of an infinitely divisible distribution. We analyze two mechanisms in this setting: 1) the generalized discrete Laplace (GDL) mechanism, whose distribution (which is closed under summation) follows from differences of i.i.d. negative binomial shares, and 2) the multi-scale discrete Laplace (MSDLap) mechanism, a novel mechanism following the sum of multiple i.i.d. discrete Laplace shares at different scales. For $\varepsilon \geq 1$, our mechanisms can be parameterized to have $O\left(\Delta^3 e^{-\varepsilon}\right)$ and $O\left(\min\left(\Delta^3 e^{-\varepsilon}, \Delta^2 e^{-2\varepsilon/3}\right)\right)$ MSE, respectively, where $\Delta$ denotes the sensitivity; the latter bound matches known optimality results. We also show a transformation from the discrete setting to the continuous setting, which allows us to transform both mechanisms to the continuous setting and thereby achieve the optimal $O\left(\Delta^2 e^{-2\varepsilon/3}\right)$ MSE. To our knowledge, these are the first infinitely divisible additive noise mechanisms that achieve order-optimal MSE under pure DP, so our work shows formally there is no separation in utility when query-independent noise-adding mechanisms are restricted to infinitely divisible noise. For the continuous setting, our result improves upon the Arete mechanism from [Pagh and Stausholm, ALT 2022], which gives an MSE of $O\left(\Delta^2 e^{-\varepsilon/4}\right)$. Furthermore, we give an exact sampler tuned to efficiently implement the MSDLap mechanism, and we apply our results to improve a state-of-the-art multi-message shuffle DP protocol in the high $\varepsilon$ regime.
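The distributed-noise idea behind the GDL mechanism can be illustrated with a small sketch (not the paper's implementation). Since the negative binomial distribution is infinitely divisible in its count parameter, each of `n` parties can draw the difference of two NB(r/n, p) variables; the sum of all shares is then distributed as the difference of two NB(r, p) variables, i.e., generalized discrete Laplace noise. The function name and parameter choices below are illustrative assumptions.

```python
import numpy as np

def gdl_share(r, p, n_parties, rng):
    # One party's noise share: a difference of two i.i.d. negative
    # binomial draws with count parameter r / n_parties. Summing the
    # shares of all n_parties parties yields NB(r, p) - NB(r, p),
    # i.e., generalized discrete Laplace noise. NumPy's
    # negative_binomial accepts a real-valued count parameter.
    return rng.negative_binomial(r / n_parties, p) - rng.negative_binomial(r / n_parties, p)

rng = np.random.default_rng(0)
# Aggregate noise as it would appear after summing 4 parties' shares.
total_noise = sum(gdl_share(4.0, 0.5, 4, rng) for _ in range(4))
```

Because each share is symmetric about zero, the aggregate noise is also symmetric, and no single party's share reveals the total.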
Problem

Research questions and friction points this paper is trying to address.

Achieving optimal MSE in differential privacy mechanisms
Analyzing infinitely divisible noise for distributed DP
Improving error bounds in the high ε regime
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses infinitely divisible noise for differential privacy
Introduces multi-scale discrete Laplace mechanism
Achieves optimal mean squared error in continuous setting
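The multi-scale construction builds on discrete Laplace noise, which can be sampled exactly as the difference of two geometric random variables. A minimal sketch follows; the scale values and the way scales are combined are illustrative assumptions, not the paper's exact MSDLap parameterization.

```python
import numpy as np

def discrete_laplace(scale, rng):
    # DLap(t) with P[X = k] proportional to exp(-|k|/t), sampled exactly
    # as the difference of two i.i.d. geometric variables with success
    # probability 1 - exp(-1/t).
    p = 1.0 - np.exp(-1.0 / scale)
    return int(rng.geometric(p) - rng.geometric(p))

def multi_scale_noise(scales, rng):
    # Illustrative multi-scale combination: a sum of independent
    # discrete Laplace draws, one per scale. The paper's MSDLap
    # mechanism chooses the scales so the summed noise meets the
    # stated MSE bounds.
    return sum(discrete_laplace(t, rng) for t in scales)

rng = np.random.default_rng(1)
noise = multi_scale_noise([1.0, 2.0, 4.0], rng)
```

Each summand is itself infinitely divisible, so every scale can be split across parties exactly as in the GDL case, which is what makes the mechanism usable in the distributed setting.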