Certification for Differentially Private Prediction in Gradient-Based Training

📅 2024-06-19
📈 Citations: 3
Influential: 0
🤖 AI Summary
Private prediction under differential privacy (DP) typically calibrates noise to global sensitivity, which is overly conservative for modern deep models and leads to excessive noise injection and poor privacy–utility trade-offs. Method: The paper proposes a scalable and verifiable framework for computing provable upper bounds on both the local and smooth sensitivity of a prediction, combining convex relaxation with interval bound propagation to make data-specific sensitivity estimation tractable for contemporary deep neural networks trained by gradient descent. Contribution/Results: The approach overcomes the computational barriers that previously made smooth sensitivity infeasible to compute exactly for modern models. Experiments on financial risk assessment, medical image classification, and natural language processing tasks show that the method reduces the required noise magnitude by up to an order of magnitude, yielding substantial improvements in prediction accuracy and practical utility under identical privacy budgets (e.g., ε = 2, δ = 10⁻⁵), while remaining feasible and scalable in practice.
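The gain described above comes from the noise scale, not the mechanism itself: noise is calibrated to a certified sensitivity bound instead of the worst-case global sensitivity. The sketch below illustrates this with Laplace noise in NumPy; the function names and the specific sensitivity values are hypothetical, and the paper's actual mechanism and constants (e.g., how δ enters the smooth-sensitivity calibration) may differ.

```python
import numpy as np

def privatize_global(pred, global_sens, epsilon, rng):
    # Standard calibration: noise scaled to the worst case over all
    # possible datasets, independent of the data actually observed.
    return pred + rng.laplace(scale=global_sens / epsilon, size=np.shape(pred))

def privatize_with_bound(pred, sens_bound, epsilon, rng):
    # Data-specific calibration: a certified upper bound on the (local or
    # smooth) sensitivity at the observed dataset is typically far smaller
    # than the global sensitivity, so far less noise achieves the same epsilon.
    return pred + rng.laplace(scale=sens_bound / epsilon, size=np.shape(pred))

rng = np.random.default_rng(0)
pred = np.array([0.9])
noisy_global = privatize_global(pred, global_sens=1.0, epsilon=2.0, rng=rng)
noisy_bound = privatize_with_bound(pred, sens_bound=0.1, epsilon=2.0, rng=rng)
```

With a sensitivity bound ten times smaller, the noise standard deviation shrinks by the same factor, which is the "order of magnitude" reduction the summary refers to.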

📝 Abstract
Differential privacy upper-bounds the information leakage of machine learning models, yet providing meaningful privacy guarantees has proven to be challenging in practice. The private prediction setting, where model outputs are privatized, is being investigated as an alternative way to provide formal guarantees at prediction time. Most current private prediction algorithms, however, rely on global sensitivity for noise calibration, which often results in large amounts of noise being added to the predictions. Data-specific noise calibration, such as smooth sensitivity, could significantly reduce the amount of noise added, but has so far been infeasible to compute exactly for modern machine learning models. In this work we provide a novel and practical approach based on convex relaxation and bound propagation to compute a provable upper bound on the local and smooth sensitivity of a prediction. This bound allows us to reduce the magnitude of noise added or improve privacy accounting in the private prediction setting. We validate our framework on datasets from financial services, medical image classification, and natural language processing and across models, and find our approach reduces the noise added by up to an order of magnitude.
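The bound-propagation primitive the abstract mentions can be illustrated with plain interval arithmetic: push a box of possible values through each layer to get sound bounds on how much the output can vary. This is a minimal sketch only; the paper couples such propagation with convex relaxation and bounds variation over neighboring datasets rather than a literal input box, and all weights here are made up for illustration.

```python
import numpy as np

def interval_affine(lo, hi, W, b):
    # Propagate a box [lo, hi] through x -> W @ x + b. Splitting W into its
    # positive and negative parts yields sound elementwise output bounds.
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def interval_relu(lo, hi):
    # ReLU is monotone, so it maps interval endpoints to endpoints.
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Certify an output range for a tiny two-layer network over the unit box.
W1, b1 = np.array([[1.0, -1.0], [0.5, 0.5]]), np.zeros(2)
W2, b2 = np.array([[1.0, 1.0]]), np.zeros(1)
lo, hi = np.array([0.0, 0.0]), np.array([1.0, 1.0])
lo, hi = interval_relu(*interval_affine(lo, hi, W1, b1))
lo, hi = interval_affine(lo, hi, W2, b2)
# (hi - lo) upper-bounds how much the prediction can move across the box;
# an analogous certified range over dataset perturbations bounds sensitivity.
```

Interval bounds are cheap (two matrix products per layer) but loose; convex relaxations tighten them, which is presumably why the paper combines the two.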
Problem

Research questions and friction points this paper is trying to address.

Achieving differential privacy in prediction via noise addition
Improving privacy-utility trade-offs with dataset-specific sensitivity bounds
Enhancing private prediction accuracy in medical and NLP tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses convex relaxation for sensitivity bounds
Applies smooth sensitivity mechanism
Reduces added noise by up to an order of magnitude at the same privacy budget