🤖 AI Summary
Problem: How to perform Bayesian hypothesis testing on confidential data while preserving individual privacy and maintaining interpretability and statistical power. Method: We propose the first framework for constructing differentially private Bayes factors. Rather than modeling the full data-generating mechanism, we design privacy-preserving Bayes factors based on common summary statistics and derive sufficient conditions for their asymptotic consistency. Contributions/Results: (1) We establish the first theoretical foundation for differentially private Bayesian hypothesis testing; (2) we develop a computationally efficient, model-agnostic construction of private Bayes factors; (3) empirical evaluations demonstrate that, under limited privacy budgets, our method substantially outperforms differentially private *p*-value approaches, achieving higher statistical power and more robust quantification of evidential strength.
📝 Abstract
Differential privacy has emerged as a significant cornerstone of scientific hypothesis testing on confidential data. In reporting scientific discoveries, Bayesian tests are widely adopted because they sidestep the key criticisms of *p*-values, namely their lack of interpretability and their inability to quantify evidence in favor of the competing hypotheses. We present a novel differentially private Bayesian hypothesis testing framework that arises naturally under a principled data-generative mechanism, inherently maintaining the interpretability of the resulting inferences. Furthermore, by focusing on differentially private Bayes factors based on widely used test statistics, we circumvent the need to model the complete data-generative mechanism and obtain substantial computational benefits. We also provide a set of sufficient conditions for establishing Bayes factor consistency under the proposed framework. The utility of the proposed methodology is showcased via several numerical experiments.
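To make the core idea concrete, the following is a minimal sketch (not the paper's actual construction) of how a Bayes factor can be computed from a privatized summary statistic: a sample mean is released through the Laplace mechanism, and a standard normal-normal Bayes factor for a point null is then evaluated on the noisy statistic. The clipping bounds, privacy budget, and prior variance `tau2` are illustrative assumptions.

```python
import math
import random

def private_mean(data, lo, hi, epsilon):
    """Release an epsilon-differentially private mean via the Laplace mechanism.
    Values are clipped to [lo, hi], so the sensitivity of the mean is (hi - lo) / n."""
    n = len(data)
    clipped = [min(max(x, lo), hi) for x in data]
    scale = (hi - lo) / (n * epsilon)
    # Draw Laplace(0, scale) noise by inverse-CDF sampling.
    u = random.random() - 0.5
    noise = -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)
    return sum(clipped) / n + noise

def normal_pdf(x, mean, var):
    """Density of N(mean, var) at x."""
    return math.exp(-((x - mean) ** 2) / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def bayes_factor_01(xbar, n, sigma2, tau2):
    """Bayes factor BF01 for H0: mu = 0 vs H1: mu ~ N(0, tau2),
    treating the (noisy) sample mean xbar as N(mu, sigma2 / n).
    Under H0 the marginal of xbar is N(0, sigma2/n); under H1 it is
    N(0, sigma2/n + tau2), so BF01 is a simple ratio of normal densities."""
    var0 = sigma2 / n
    var1 = sigma2 / n + tau2
    return normal_pdf(xbar, 0.0, var0) / normal_pdf(xbar, 0.0, var1)

# Illustrative usage: a noisy mean near 0 favors H0 (BF01 > 1),
# while a noisy mean far from 0 favors H1 (BF01 < 1).
xbar_priv = private_mean([0.5] * 1000, lo=0.0, hi=1.0, epsilon=1.0)
bf = bayes_factor_01(xbar_priv - 0.5, n=1000, sigma2=1.0, tau2=1.0)
```

A caveat that the paper's theory addresses and this sketch ignores: the Laplace noise changes the sampling distribution of the released statistic, so a principled private Bayes factor must account for the privacy mechanism rather than plugging the noisy value into the non-private formula.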