🤖 AI Summary
To address the significant utility degradation in differentially private deep learning caused by excessive noise injection, this paper proposes a frequency-domain differentially private gradient optimization method. First, gradients are transformed into the frequency domain, enabling effective noise suppression while preserving all original gradient information. Second, an adaptive frequency-domain noise scaling mechanism is introduced, with rigorous privacy budget analysis grounded in Rényi differential privacy theory. Our approach achieves end-to-end differential privacy guarantees using only half the noise scale required by conventional DPSGD. Extensive experiments across multiple models and benchmark datasets demonstrate consistent superiority over DPSGD and other baselines: test accuracy improves by 3.2–5.7 percentage points on average, while maintaining identical (ε, δ)-differential privacy guarantees.
📝 Abstract
Deep learning models have been widely adopted across domains due to their ability to represent hierarchical features, an ability that depends heavily on the training data and procedure. Protecting the training process and deep learning algorithms is therefore paramount for privacy preservation. Although Differential Privacy (DP), a rigorous privacy framework, has achieved promising results in deep learning training, existing schemes still fall short in preserving model utility: they either require a high noise scale or inevitably damage the original gradients. To address these issues, we present GReDP, a more robust approach to DP training. Specifically, we compute the model gradients in the frequency domain and adopt a new approach to reduce the noise level. Unlike prior work, GReDP requires only half the noise scale of DPSGD [1] while keeping all gradient information intact. We analyze our method in detail, both theoretically and empirically. Experimental results show that GReDP consistently outperforms the baselines across all models and training settings.
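To make the core idea concrete, here is a minimal sketch of a frequency-domain noisy gradient step: clip the gradient (bounding its sensitivity as in DPSGD), transform it with an FFT, perturb the spectrum with Gaussian noise, and transform back. The FFT is invertible, so no gradient information is discarded. The function name, parameters, and noise placement are illustrative assumptions, not the paper's actual GReDP algorithm or its privacy calibration.

```python
import numpy as np

def noisy_gradient_freq(grad, clip_norm=1.0, noise_scale=0.5, rng=None):
    """Illustrative sketch (hypothetical, not the paper's algorithm):
    clip a gradient vector, add Gaussian noise in the frequency domain,
    and return the real part of the inverse transform."""
    rng = np.random.default_rng() if rng is None else rng
    # Per-example clipping as in DPSGD: bound the L2 sensitivity.
    norm = np.linalg.norm(grad)
    grad = grad * min(1.0, clip_norm / norm)
    # Move to the frequency domain; the FFT is invertible, so all
    # gradient information is preserved at this step.
    spec = np.fft.fft(grad)
    # Perturb the spectrum with complex Gaussian noise.
    noise = rng.normal(0.0, noise_scale * clip_norm, spec.shape) \
          + 1j * rng.normal(0.0, noise_scale * clip_norm, spec.shape)
    spec = spec + noise
    # Return to the parameter domain; keep the real part.
    return np.real(np.fft.ifft(spec))
```

With `noise_scale=0` the function reduces to plain gradient clipping, which makes the information-preserving round trip easy to check.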