Interpreting the Error of Differentially Private Median Queries through Randomization Intervals

📅 2026-04-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of interpreting differentially private median queries, where noise injection often compromises result interpretability. Existing approaches typically sacrifice median utility to obtain high-quality randomized intervals. To overcome this trade-off, we propose PostRI, the first method that decouples median estimation from randomized interval construction. PostRI first computes a differentially private median estimate and then adaptively generates a tight randomized interval based on posterior error analysis. By avoiding additional perturbation to the median itself, PostRI significantly enhances statistical utility while maintaining reasonable interval width. Empirical evaluations demonstrate that our approach improves median estimation accuracy by 14% to 850% compared to state-of-the-art methods.
📝 Abstract
It can be difficult for practitioners to interpret the quality of differentially private (DP) statistics due to the added noise. One method to help analysts understand the amount of error introduced by DP is to return a Randomization Interval (RI) along with the statistic. An RI is a type of confidence interval that bounds the error introduced by DP. For queries where the noise distribution depends on the input, such as the median, prior work degrades the quality of the median itself to obtain a high-quality RI. In this work, we propose PostRI, a solution to compute an RI after the median has been estimated. PostRI enables a median estimation with 14%-850% higher utility than related work, while maintaining a narrow RI.
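The setup the abstract describes (release a DP median, then report an interval bounding the DP-induced error) can be sketched with the standard exponential-mechanism median from the DP literature. This is only an illustration of the problem setting, not PostRI's algorithm: the function names, the symmetric-window interval construction, and all parameters below are assumptions of this sketch.

```python
import math
import random

def dp_median(data, epsilon):
    """Differentially private median via the exponential mechanism
    (textbook construction, NOT the paper's PostRI method).
    Candidates are the data points; a point's utility is minus the
    distance of its rank from the median rank (sensitivity 1)."""
    data = sorted(data)
    n = len(data)
    target = n // 2
    weights = [math.exp(-epsilon * abs(i - target) / 2) for i in range(n)]
    idx = random.choices(range(n), weights=weights, k=1)[0]
    return data[idx]

def randomization_interval(data, epsilon, beta=0.05):
    """Illustrative (1 - beta) randomization interval: the smallest
    symmetric rank window around the median rank that captures at
    least 1 - beta of the mechanism's output probability mass.
    PostRI's posterior-based construction differs from this."""
    data = sorted(data)
    n = len(data)
    target = n // 2
    weights = [math.exp(-epsilon * abs(i - target) / 2) for i in range(n)]
    total = sum(weights)
    lo = hi = target
    mass = weights[target]
    # Grow the window one rank on each side until it holds 1 - beta mass.
    while mass / total < 1 - beta and (lo > 0 or hi < n - 1):
        if lo > 0:
            lo -= 1
            mass += weights[lo]
        if hi < n - 1:
            hi += 1
            mass += weights[hi]
    return data[lo], data[hi]
```

Because the exponential mechanism's noise depends on the data's rank structure, a tight interval here requires knowing the data, which is exactly the input-dependence that makes RIs for the median harder than for Laplace-noised counts.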
Problem

Research questions and friction points this paper is trying to address.

differentially private median
randomization interval
error interpretation
privacy-preserving statistics
utility degradation
Innovation

Methods, ideas, or system contributions that make the work stand out.

PostRI
Randomization Interval
Differentially Private Median
Utility Improvement
Confidence Interval
Thomas Humphries
University of Waterloo
Differential Privacy · Secure Computation · Genetic Algorithms
Tim Li
University of Waterloo
Shufan Zhang
University of Waterloo
Karl Knopf
University of Waterloo
Xi He
University of Waterloo
Privacy · Security · Database · Location data