🤖 AI Summary
This paper analyzes the privacy guarantees of the Laplace mechanism for histogram publication. It proposes a context-aware privacy analysis based on pointwise maximal leakage (PML), departing from the distribution-agnostic assumptions of standard differential privacy. By incorporating prior knowledge of the data distribution, the framework derives tighter privacy bounds: when every histogram bin probability is bounded away from zero, strictly stronger privacy is guaranteed at the same noise level. The analysis shows that this yields tighter privacy guarantees without sacrificing utility, improving the privacy–utility trade-off. The core contribution is the systematic integration of distributional assumptions into a PML-based analysis, establishing a context-sensitive approach to quantifying privacy.
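For concreteness, the mechanism under analysis is the standard Laplace mechanism applied to histogram counts: each bin count is perturbed with independent Laplace noise whose scale is set by the sensitivity and the noise parameter. The sketch below is a minimal illustration, not code from the paper; the unit sensitivity and the helper name `laplace_histogram` are assumptions (the appropriate sensitivity depends on the neighboring-dataset convention).

```python
import numpy as np

def laplace_histogram(counts, epsilon, sensitivity=1.0, rng=None):
    """Release a noisy histogram by adding independent Laplace(sensitivity/epsilon)
    noise to each bin count. Sensitivity = 1.0 is an assumed convention."""
    rng = np.random.default_rng() if rng is None else rng
    counts = np.asarray(counts, dtype=float)
    scale = sensitivity / epsilon
    return counts + rng.laplace(loc=0.0, scale=scale, size=counts.shape)

# Example: a 4-bin histogram released at epsilon = 1.0
true_counts = [120, 45, 30, 5]
print(laplace_histogram(true_counts, epsilon=1.0))
```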
📝 Abstract
We analyze the privacy guarantees of the Laplace mechanism for releasing the histogram of a dataset through the lens of pointwise maximal leakage (PML). While differential privacy is commonly used to quantify the privacy loss, it is a context-free definition that does not depend on the data distribution. In contrast, PML enables a more refined analysis by incorporating assumptions about the data distribution. We show that when the probability of each histogram bin is bounded away from zero, stronger privacy protection can be achieved for a fixed level of noise. Our results demonstrate the advantage of context-aware privacy measures and show that incorporating assumptions about the data can improve privacy-utility tradeoffs.
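To convey the flavor of the context-aware argument (not the paper's actual bound): for finite alphabets, the PML of an observed outcome y is commonly written as log( max_x P(y|x) / P_Y(y) ), where P_Y(y) is the prior-weighted average of P(y|x). Because this denominator is an average rather than the worst case min_x P(y|x) used in a context-free privacy-loss ratio, a prior that keeps every bin probability away from zero pushes P_Y(y) up and the leakage down. The toy discretized channel and numbers below are assumptions chosen purely for illustration.

```python
import numpy as np

def pml_per_outcome(prior, channel):
    """PML of each outcome y for a finite channel, using the simplified form
    log( max_x P(y|x) / P_Y(y) ). prior: shape (K,); channel: shape (K, M),
    with row x giving P(.|x)."""
    p_y = prior @ channel              # marginal P_Y(y) under the assumed prior
    worst_row = channel.max(axis=0)    # max_x P(y|x)
    return np.log(worst_row / p_y)

# Toy discretized "Laplace-like" channel: 3 histogram hypotheses, 3 outputs.
channel = np.array([[0.70, 0.20, 0.10],
                    [0.20, 0.60, 0.20],
                    [0.10, 0.20, 0.70]])

uniform_prior = np.array([1/3, 1/3, 1/3])   # every bin bounded away from zero

pml = pml_per_outcome(uniform_prior, channel)
context_free = np.log(channel.max(axis=0) / channel.min(axis=0))

print("PML per outcome:        ", np.round(pml, 3))           # ~ [0.742, 0.588, 0.742]
print("context-free worst case:", np.round(context_free, 3))  # ~ [1.946, 1.099, 1.946]
```

In this toy example the per-outcome leakage under the uniform prior is well below the context-free worst-case ratio, which is the qualitative effect the abstract describes for histogram bins with probabilities bounded away from zero.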