Differential privacy guarantees of Markov chain Monte Carlo algorithms

📅 2025-02-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of theoretical differential privacy (DP) guarantees for Markov chain Monte Carlo (MCMC) algorithms under DP constraints. We establish, for the first time, rigorous DP bounds—both for individual sample outputs and for Monte Carlo estimators—identifying the essential prerequisite that the target distribution itself must satisfy DP. Leveraging the Girsanov theorem and noise-perturbation analysis, we develop a novel analytical framework applicable to non-convex and unbounded settings. Within this framework, we derive uniform-in-$n$ Rényi DP bounds for the release of the chain's state after $n$ iterations, as well as trajectory-level Rényi DP bounds, for both the unadjusted Langevin algorithm (ULA) and stochastic gradient Langevin dynamics (SGLD). Our results fill a critical theoretical gap in privacy-preserving MCMC under realistic, non-ideal conditions, and provide actionable theoretical guidance—particularly for algorithm design and principled privacy–accuracy trade-offs—along with concrete parameter selection criteria.

📝 Abstract
This paper aims to provide differential privacy (DP) guarantees for Markov chain Monte Carlo (MCMC) algorithms. In a first part, we establish DP guarantees on samples output by MCMC algorithms as well as Monte Carlo estimators associated with these methods under assumptions on the convergence properties of the underlying Markov chain. In particular, our results highlight the critical condition of ensuring the target distribution is differentially private itself. In a second part, we specialise our analysis to the unadjusted Langevin algorithm and stochastic gradient Langevin dynamics and establish guarantees on their (Rényi) DP. To this end, we develop a novel methodology based on Girsanov's theorem combined with a perturbation trick to obtain bounds for an unbounded domain and in a non-convex setting. We establish: (i) uniform in $n$ privacy guarantees when the state of the chain after $n$ iterations is released, (ii) bounds on the privacy of the entire chain trajectory. These findings provide concrete guidelines for privacy-preserving MCMC.
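For context on the algorithm the abstract analyzes: one ULA update takes a gradient step on the potential $U$ (the negative log-density of the target) and injects Gaussian noise scaled by the step size. Below is a minimal, hedged sketch of this update; the function names and the quadratic-potential example are illustrative choices, not the paper's code, and the sketch carries no privacy calibration of its own.

```python
import numpy as np

def ula_step(theta, grad_U, step, rng):
    """One unadjusted Langevin algorithm (ULA) update:
    theta <- theta - step * grad_U(theta) + sqrt(2 * step) * N(0, I)."""
    noise = rng.standard_normal(theta.shape)
    return theta - step * grad_U(theta) + np.sqrt(2.0 * step) * noise

# Illustration: target N(0, 1), i.e. U(x) = x^2 / 2 and grad_U(x) = x.
rng = np.random.default_rng(0)
theta = np.zeros(1)
samples = []
for _ in range(20000):
    theta = ula_step(theta, lambda x: x, 0.1, rng)
    samples.append(theta[0])

# After burn-in, the empirical mean and variance approximate the target's
# (up to the discretization bias inherent to ULA at a fixed step size).
mean = np.mean(samples[5000:])
var = np.var(samples[5000:])
```

The paper's privacy analysis concerns how the injected Gaussian noise in this very recursion, combined with convergence of the chain, yields Rényi DP bounds when the $n$-th state (or the full trajectory) is released.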
Problem

Research questions and friction points this paper is trying to address.

Differential privacy for MCMC algorithms
Guarantees for Monte Carlo estimators
Privacy bounds for Langevin algorithms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Differential privacy for MCMC
Girsanov's theorem application
Unbounded domain privacy bounds
Andrea Bertazzi
Unknown affiliation
Machine learning · Computational statistics
Tim Johnston
Université Paris Dauphine - PSL, France
Gareth O. Roberts
University of Warwick, UK
Alain Durmus
École polytechnique
Machine learning · Statistics