GReDP: A More Robust Approach for Differential Private Training with Gradient-Preserving Noise Reduction

📅 2024-09-18
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the significant utility degradation in differentially private deep learning caused by excessive noise injection, this paper proposes a frequency-domain differentially private gradient optimization method. First, gradients are transformed into the frequency domain, enabling effective noise suppression while preserving all original gradient information. Second, an adaptive frequency-domain noise scaling mechanism is introduced, with rigorous privacy budget analysis grounded in Rényi differential privacy theory. The approach achieves end-to-end differential privacy guarantees using only half the noise scale required by conventional DPSGD. Extensive experiments across multiple models and benchmark datasets demonstrate consistent superiority over DPSGD and other baselines: test accuracy improves by 3.2–5.7 percentage points on average, while maintaining identical (ε, δ)-differential privacy guarantees.
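The pipeline described above (clip the gradient, move it to the frequency domain, inject noise there, transform back) can be sketched roughly as follows. This is an illustrative toy, not the paper's actual algorithm: the function name `gredp_step_sketch`, the use of a plain complex-Gaussian perturbation, and the parameter choices are all assumptions, and the paper's adaptive noise-scaling and noise-reduction mechanism is not reproduced here.

```python
import numpy as np

def gredp_step_sketch(grad, clip_norm=1.0, sigma=0.5, rng=None):
    """Toy sketch of frequency-domain gradient perturbation (not the paper's method)."""
    rng = rng or np.random.default_rng()
    # Step 1: per-sample gradient clipping, as in DPSGD.
    g = grad * min(1.0, clip_norm / (np.linalg.norm(grad) + 1e-12))
    # Step 2: move the clipped gradient to the frequency domain.
    G = np.fft.fft(g)
    # Step 3: add Gaussian noise in the frequency domain (a simplification;
    # the paper's mechanism additionally suppresses/scales the noise).
    noise = rng.normal(0.0, sigma * clip_norm, G.shape) \
          + 1j * rng.normal(0.0, sigma * clip_norm, G.shape)
    # Step 4: transform back and keep the real part.
    return np.real(np.fft.ifft(G + noise))
```

With `sigma=0` the round trip through `fft`/`ifft` recovers the clipped gradient exactly, which is the sense in which the frequency-domain view "keeps all gradient information intact."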

📝 Abstract
Deep learning models have been extensively adopted in various domains due to their ability to represent hierarchical features, which relies heavily on the training set and procedure. Protecting the training process and deep learning algorithms is therefore paramount for privacy preservation. Although Differential Privacy (DP), as a powerful privacy-preserving primitive, has achieved satisfying results in deep learning training, existing schemes still fall short in preserving model utility: they either invoke a high noise scale or inevitably harm the original gradients. To address these issues, we present a more robust approach for DP training called GReDP. Specifically, we compute the model gradients in the frequency domain and adopt a new approach to reduce the noise level. Unlike previous work, GReDP requires only half the noise scale of DPSGD [1] while keeping all gradient information intact. We present a detailed analysis of our method both theoretically and empirically. Experimental results show that GReDP consistently outperforms the baselines on all models and training settings.
Problem

Research questions and friction points this paper is trying to address.

Addresses privacy preservation in deep learning training.
Reduces noise scale while preserving gradient information.
Improves model utility with provable security guarantees.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Gradient computation in frequency domain
Reduced noise scale by half
Preserves all gradient information
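The summary states that the privacy analysis is grounded in Rényi differential privacy (RDP). As a generic illustration of why the noise scale matters (this is the standard Gaussian-mechanism accounting for a single release, not the paper's actual multi-step analysis), the RDP cost of the Gaussian mechanism at order α is α / (2σ²), which converts to an (ε, δ)-DP guarantee:

```python
import math

def gaussian_rdp(alpha, sigma):
    # RDP of the Gaussian mechanism with noise multiplier sigma at order alpha.
    return alpha / (2.0 * sigma ** 2)

def rdp_to_eps(rdp, alpha, delta):
    # Standard conversion from (alpha, rdp)-RDP to (eps, delta)-DP.
    return rdp + math.log(1.0 / delta) / (alpha - 1.0)

def best_eps(sigma, delta=1e-5, max_alpha=64):
    # Optimize over integer RDP orders for the tightest epsilon.
    return min(rdp_to_eps(gaussian_rdp(a, sigma), a, delta)
               for a in range(2, max_alpha))
```

Halving σ roughly quadruples the RDP cost, so ε grows sharply; a method that matches a baseline's utility while tolerating half the effective noise scale at the same (ε, δ) is therefore a meaningful gain.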
👥 Authors
Haodi Wang, Department of Computer Science, City University of Hong Kong; Lab for Artificial Intelligence Powered FinTech, HK SAR
Tangyu Jiang, Graduate School at Shenzhen, Tsinghua University, Shenzhen, China
Yu Guo, School of Artificial Intelligence, Beijing Normal University
Chengjun Cai, Department of Computer Science, City University of Hong Kong (Dongguan)
Cong Wang, Department of Computer Science, City University of Hong Kong
Xiaohua Jia, Chinese Academy of Sciences