On Joint Noise Scaling in Differentially Private Federated Learning with Multiple Local Steps

📅 2024-07-27
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In federated learning, combining multi-step local optimization with differential privacy (DP) and secure aggregation (SecAgg) remains challenging: conventional approaches enforce DP compliance per communication round, severely restricting the number of local updates and degrading model utility. This paper introduces the first end-to-end analytical framework that jointly guarantees DP and SecAgg for multi-step local SGD. Leveraging a novel noise scaling mechanism, it allocates the privacy budget across local iterations without compromising SecAgg's cryptographic security, thereby achieving a strict global ε-DP guarantee. Theoretically, the framework improves privacy budget utilization; empirically, on CIFAR-10, it achieves up to a 3.2% accuracy gain under the same number of communication rounds, simultaneously improving communication efficiency, model utility, and formal privacy guarantees.

📝 Abstract
Federated learning is a distributed learning setting whose main aim is to train machine learning models without sharing raw data, exchanging only what is required for learning. To guarantee training data privacy and high-utility models, differential privacy and secure aggregation techniques are often combined with federated learning. However, with fine-grained protection granularities, existing techniques require the parties to communicate at every local optimisation step if they want to fully benefit from secure aggregation in terms of the resulting formal privacy guarantees. In this paper, we show how a simple new analysis allows the parties to perform multiple local optimisation steps while still benefiting from joint noise scaling when using secure aggregation. We show that our analysis enables higher-utility models with guaranteed privacy protection under a limited number of communication rounds.
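The joint noise scaling idea can be illustrated numerically: when N clients are summed under secure aggregation, each client only needs to add per-step Gaussian noise with standard deviation σC/√N, so the aggregate carries the full σC of central noise even when each client takes several local steps between communication rounds. A minimal NumPy sketch of this setup; all names, constants, and the toy gradient are hypothetical illustrations, not the paper's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: N clients each run K local SGD steps per round.
N, K, DIM = 10, 5, 100
CLIP = 1.0    # per-step gradient clipping norm C
SIGMA = 2.0   # target noise multiplier for the *aggregate*, per step

def fake_gradient(params, data):
    # Stand-in for a real per-example model gradient.
    return params - data

def local_update(params, data, steps=K):
    """One client's contribution: `steps` clipped, noised SGD steps.

    Joint noise scaling: each client adds Gaussian noise with std
    SIGMA * CLIP / sqrt(N) per step. After secure aggregation the
    summed noise has std SIGMA * CLIP per step, matching the central
    Gaussian mechanism, while no single client's share is enough on
    its own.
    """
    delta = np.zeros(DIM)
    for _ in range(steps):
        grad = fake_gradient(params + delta, data)
        # Clip the gradient to norm at most CLIP.
        grad = grad / max(1.0, np.linalg.norm(grad) / CLIP)
        noise = rng.normal(0.0, SIGMA * CLIP / np.sqrt(N), size=DIM)
        delta -= 0.1 * (grad + noise)
    return delta

# One communication round: the server only ever sees the (securely
# aggregated) sum of client deltas, never an individual update.
params = np.zeros(DIM)
client_data = [rng.normal(size=DIM) for _ in range(N)]
aggregate = sum(local_update(params, d) for d in client_data) / N
params = params + aggregate
```

Note that in this regime the privacy guarantee holds only for the securely aggregated sum: any single client's update carries 1/√N of the required noise, which is precisely why SecAgg is needed to keep individual updates hidden.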
Problem

Research questions and friction points this paper is trying to address.

Enhance privacy in federated learning with secure aggregation
Reduce communication rounds while maintaining differential privacy
Improve model utility with multiple local optimization steps
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines secure aggregation with differential privacy
Enables multiple local optimization steps
Improves utility with limited communication rounds
Mikko A. Heikkilä
Telefónica Research