Technical Report: Full Version of Analyzing and Optimizing Perturbation of DP-SGD Geometrically

📅 2025-04-07
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
DP-SGD injects isotropic noise directly into the gradient space, disproportionately perturbing gradient directions relative to magnitudes—severely degrading convergence efficiency—yet its geometric root cause remained uncharacterized. Method: This paper establishes, for the first time, a geometric characterization proving that directional perturbation is the primary source of performance degradation. Building on this insight, we propose GeoDP: a novel differentially private training framework that decouples gradient direction and magnitude, preserving directional consistency while applying noise solely to the magnitude—under the same ε-DP budget. Contribution/Results: We provide rigorous theoretical analysis and extensive experiments across MNIST, CIFAR-10, and synthetic datasets with Logistic Regression, CNN, and ResNet architectures. GeoDP achieves significantly faster convergence and higher final accuracy (average improvement of +2.1%), while strictly satisfying ε-DP. It introduces a provably superior, geometry-aware design paradigm for privacy-preserving machine learning.

Technology Category

Application Category

📝 Abstract
Differential privacy (DP) has become a prevalent privacy model in a wide range of machine learning tasks, especially after the debut of DP-SGD. However, DP-SGD, which directly perturbs gradients in the training iterations, fails to mitigate the negative impacts of noise on gradient direction. As a result, DP-SGD is often inefficient. Although various solutions (e.g., clipping to reduce the sensitivity of gradients and amplifying privacy bounds to save privacy budgets) are proposed to trade privacy for model efficiency, the root cause of its inefficiency is yet unveiled. In this work, we first generalize DP-SGD and theoretically derive the impact of DP noise on the training process. Our analysis reveals that, in terms of a perturbed gradient, only the noise on direction has eminent impact on the model efficiency while that on magnitude can be mitigated by optimization techniques, i.e., fine-tuning gradient clipping and learning rate. Besides, we confirm that traditional DP introduces biased noise on the direction when adding unbiased noise to the gradient itself. Overall, the perturbation of DP-SGD is actually sub-optimal from a geometric perspective. Motivated by this, we design a geometric perturbation strategy GeoDP within the DP framework, which perturbs the direction and the magnitude of a gradient, respectively. By directly reducing the noise on the direction, GeoDP mitigates the negative impact of DP noise on model efficiency with the same DP guarantee. Extensive experiments on two public datasets (i.e., MNIST and CIFAR-10), one synthetic dataset and three prevalent models (i.e., Logistic Regression, CNN and ResNet) confirm the effectiveness and generality of our strategy.
Problem

Research questions and friction points this paper is trying to address.

Analyzes DP-SGD's inefficiency due to gradient direction noise
Proposes GeoDP to separately perturb gradient direction and magnitude
Validates GeoDP's effectiveness on multiple datasets and models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generalizes DP-SGD to analyze noise impact geometrically
Proposes GeoDP strategy for separate gradient direction perturbation
Reduces directional noise while maintaining DP guarantees
🔎 Similar Papers
No similar papers found.