Learning with Differentially Private (Sliced) Wasserstein Gradients

📅 2025-02-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the absence of differential privacy (DP) guarantees in Wasserstein distance-driven machine learning tasks. Methodologically, it introduces the first framework for computing Wasserstein gradients under strict DP: (i) it derives the first closed-form expression for the discrete Wasserstein gradient and establishes a tight sensitivity bound; (ii) it adapts DP-SGD to the non-finite-sum structure of optimal transport objectives, integrating sliced-Wasserstein approximation, gradient/activation clipping, and Rényi differential privacy accounting to achieve efficient privacy–utility trade-offs. Empirically, on generative modeling and distribution alignment benchmarks, the method attains performance comparable to non-private baselines with modest privacy budgets (ε ≈ 2–4). Theoretical analysis provides rigorous end-to-end DP guarantees, and the framework scales to large-scale training settings.

Technology Category

Application Category

📝 Abstract
In this work, we introduce a novel framework for privately optimizing objectives that rely on Wasserstein distances between data-dependent empirical measures. Our main theoretical contribution is, based on an explicit formulation of the Wasserstein gradient in a fully discrete setting, a control on the sensitivity of this gradient to individual data points, allowing strong privacy guarantees at minimal utility cost. Building on these insights, we develop a deep learning approach that incorporates gradient and activations clipping, originally designed for DP training of problems with a finite-sum structure. We further demonstrate that privacy accounting methods extend to Wasserstein-based objectives, facilitating large-scale private training. Empirical results confirm that our framework effectively balances accuracy and privacy, offering a theoretically sound solution for privacy-preserving machine learning tasks relying on optimal transport distances such as Wasserstein distance or sliced-Wasserstein distance.
Problem

Research questions and friction points this paper is trying to address.

Private optimization of Wasserstein distances
Control gradient sensitivity for privacy
Balance accuracy and privacy in learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Differentially Private Wasserstein Gradients
Gradient and Activations Clipping
Privacy Accounting for Wasserstein Objectives
🔎 Similar Papers
No similar papers found.