🤖 AI Summary
To address the straggler problem caused by heterogeneous devices in federated learning (FL) and the high communication/computation overhead under differential privacy (DP) guarantees, this paper proposes LightDP-FL—a lightweight DP-compliant FL framework. Its core contributions are threefold: (1) a novel two-tier noise injection mechanism—individual-level noise at clients and pairwise noise during model aggregation—that achieves provable ε-DP even under untrusted servers and colluding peers; (2) derivation of the minimal noise variance required to tolerate worst-case stragglers and bounded collusion, based on theoretical upper bounds; and (3) joint optimization of the convergence bound to balance privacy budget allocation and model accuracy. Theoretical analysis establishes robust convergence under DP constraints. Extensive experiments on CIFAR-10 demonstrate that, under identical DP budgets, LightDP-FL achieves faster convergence, enhanced straggler resilience, higher test accuracy, and significantly reduced communication and computational overhead compared to state-of-the-art baselines.
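The two-tier noise mechanism described above can be sketched with a toy pairwise-masking construction: each client adds its own individual Gaussian noise, plus antisymmetric noise shared with each peer that cancels when the server sums the updates. This is an illustrative sketch only, not the paper's exact protocol — the names (`mask_update`, `pairwise_noise`), the shared-seed pseudorandom generator, and the unit pairwise variance are assumptions; LightDP-FL derives the actual noise variances from the worst-case straggler and colluder bounds.

```python
import numpy as np

def pairwise_noise(seed, shape):
    """Noise that both endpoints of a pair regenerate from a shared seed
    (hypothetical stand-in for an agreed pseudorandom generator)."""
    return np.random.default_rng(seed).normal(0.0, 1.0, shape)

def mask_update(client_id, update, peers, pair_seeds, sigma_ind, rng):
    """Perturb a model update with individual Gaussian noise plus
    antisymmetric pairwise noise: the lower-id peer adds it, the
    higher-id peer subtracts it, so it cancels in the aggregate."""
    masked = update + rng.normal(0.0, sigma_ind, update.shape)
    for j in peers:
        n = pairwise_noise(pair_seeds[frozenset((client_id, j))], update.shape)
        masked = masked + n if client_id < j else masked - n
    return masked

# Toy demo: 3 clients, a 4-dimensional model update each.
rng = np.random.default_rng(42)
clients = [0, 1, 2]
pair_seeds = {frozenset((i, j)): 1000 * i + j
              for i in clients for j in clients if i < j}
updates = {i: rng.normal(size=4) for i in clients}

# sigma_ind = 0 isolates the pairwise terms: they cancel exactly in the sum.
masked = {i: mask_update(i, updates[i],
                         [j for j in clients if j != i],
                         pair_seeds, 0.0, rng)
          for i in clients}
aggregate = sum(masked.values())
print(np.allclose(aggregate, sum(updates.values())))  # True
```

In this sketch each individual masked update still carries the pairwise noise, so a single update reveals little, while the aggregate is exact; with `sigma_ind > 0`, residual individual noise survives aggregation, which is what supplies the DP guarantee even if pairwise terms from dropped stragglers fail to cancel.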
📝 Abstract
Federated learning (FL) enables collaborative model training through model parameter exchanges instead of raw data sharing. To avoid potential inference attacks on exchanged parameters, differential privacy (DP) offers a rigorous guarantee against various attacks. However, conventional methods that ensure DP by adding local noise alone often result in low training accuracy. Combining secure multi-party computation (SMPC) with DP improves accuracy but incurs high communication and computation overheads, as well as straggler vulnerability, on either client-to-server or client-to-client links. In this paper, we propose LightDP-FL, a novel lightweight scheme that ensures provable DP against untrusted peers and an untrusted server, while maintaining straggler resilience, low overheads, and high training accuracy. Our scheme injects both individual and pairwise noise into each client's parameters, which can be implemented with minimal overhead. Given the uncertain straggler and colluder sets, we use upper bounds on the numbers of stragglers and colluders to prove sufficient noise-variance conditions that ensure DP in the worst case. Moreover, we optimize the expected convergence bound to preserve accuracy by flexibly controlling the noise variances. On the CIFAR-10 dataset, our experimental results demonstrate that LightDP-FL achieves faster convergence and stronger straggler resilience than baseline methods at the same DP level.
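To illustrate the kind of trade-off the convergence-bound optimization targets, DP-FL analyses commonly yield bounds of the following generic shape. This is not the bound derived in the paper; the symbols ($T$ communication rounds, $K$ participating clients, $d$ model dimension, individual noise variance $\sigma^2_{\mathrm{ind}}$) are illustrative assumptions:

```latex
\mathbb{E}\!\left[F(\bar{w}_T)\right] - F^{\star}
  \;\le\;
  \underbrace{\mathcal{O}\!\left(\tfrac{1}{T}\right)}_{\text{optimization error}}
  \;+\;
  \underbrace{\mathcal{O}\!\left(\tfrac{d\,\sigma^2_{\mathrm{ind}}}{K}\right)}_{\text{residual individual noise}}
```

The pairwise noise cancels in aggregation and so contributes no such residual term, while $\sigma^2_{\mathrm{ind}}$ must stay above the worst-case DP threshold; balancing these two constraints is what "flexibly controlling the noise variances" refers to.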