🤖 AI Summary
This work addresses the excessive noise that high global sensitivity forces onto differentially private (DP) linear queries such as sum, mean, and count. We propose a noise-reduction method based on simplex projection: mapping high-sensitivity data onto a fixed-norm probability simplex so that privacy loss can be reused without increasing the ε budget. We first identify a "free lunch" phenomenon for linear queries on the simplex: redundant queries can be answered at zero additional privacy cost. Combining sensitivity analysis, decomposition of linear queries, and algebraic reconstruction, we prove theoretically and validate empirically that the method reduces the variance of DP estimates by a factor of O(n) under ε-DP, significantly improving the accuracy of mean and sum estimation while preserving strict privacy guarantees.
📝 Abstract
We show that the most well-known and fundamental building blocks of DP implementations -- sum, mean, count, and many other linear queries -- can be released with substantially reduced noise at the same privacy guarantee. We achieve this by projecting individual data with worst-case sensitivity $R$ onto a simplex on which every record has constant norm $R$. On this simplex, additional ``free'' queries can be run that are already covered by the privacy loss of the original budgeted query, and which algebraically yield additional estimates of counts or sums.
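A minimal sketch of the free-lunch effect, assuming the simplest possible embedding (it is not the paper's exact construction): each scalar record $x_i \in [0, R]$ is mapped to the pair $(x_i, R - x_i)$, which has constant L1 norm $R$. Swapping out one record then moves the *vector* sum by at most $R$ in L1 norm, so under the standard Laplace mechanism both coordinates of the vector sum can be released for the same ε as the scalar sum alone. The redundant coordinate gives a second, algebraically linked estimate of the sum (here assuming the record count `n` is public), and averaging the two estimates lowers the variance. All names and parameters below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

R = 10.0   # worst-case record magnitude = L1 sensitivity of the sum
eps = 1.0  # privacy budget
n = 1000   # number of records (assumed public in this sketch)
x = rng.uniform(0.0, R, size=n)  # private scalar records in [0, R]

# Two-coordinate simplex embedding: each row (x_i, R - x_i) has L1 norm R.
X = np.stack([x, R - x], axis=1)
true_sum = x.sum()

def noisy_vector_sum(X, eps, R, rng):
    """Release the coordinate-wise sum under eps-DP via the Laplace
    mechanism. Replacing one record changes the vector sum by at most R
    in L1 norm, so the noise scale R/eps matches the scalar-sum cost."""
    return X.sum(axis=0) + rng.laplace(scale=R / eps, size=X.shape[1])

s1, s2 = noisy_vector_sum(X, eps, R, rng)

est_direct = s1           # estimate of the sum from the first coordinate
est_free = n * R - s2     # "free" estimate from the redundant coordinate
est_avg = 0.5 * (est_direct + est_free)  # averaging reduces variance
```

With only two coordinates the variance drops by a constant factor; the paper's O(n) claim rests on a richer decomposition, but the mechanism is the same: queries that are already covered by the released vector's privacy loss cost nothing extra.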