🤖 AI Summary
In federated learning (FL), existing privacy-preserving mechanisms—such as differential privacy (DP) and homomorphic encryption (HE)—struggle to simultaneously ensure strong privacy guarantees, high model utility, and computational efficiency, resulting in rigid trade-offs that hinder practical deployment. To address this, we propose ParaAegis, a tunable parallel protection framework. It partitions global models across clients, achieves block-wise consensus via distributed voting, and combines lightweight DP with HE so that privacy strength and computational overhead can be tuned jointly. Crucially, ParaAegis decouples privacy protection into independently executable, parallelizable subtasks, enabling on-demand balancing among prediction accuracy, training speed, and privacy budget. Experiments demonstrate that, under identical privacy guarantees, ParaAegis can improve test accuracy by up to 12.3% or reduce training time by up to 41%, significantly enhancing the adaptability and practicality of FL systems in heterogeneous environments.
📝 Abstract
Federated learning (FL) faces a critical dilemma: existing protection mechanisms like differential privacy (DP) and homomorphic encryption (HE) enforce a rigid trade-off, forcing a choice between model utility and computational efficiency. This lack of flexibility hinders practical deployment. To address this, we introduce ParaAegis, a parallel protection framework designed to give practitioners flexible control over the privacy-utility-efficiency balance. Our core innovation is a strategic model partitioning scheme. By applying lightweight DP to the less critical, low-norm portion of the model while protecting the remainder with HE, we create a tunable system. A distributed voting mechanism ensures consensus on this partitioning. Theoretical analysis confirms that efficiency and utility can be traded off while maintaining the same privacy guarantee. Crucially, experimental results demonstrate that adjusting the hyperparameters enables flexible prioritization between model accuracy and training time.
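To make the partitioning idea concrete, here is a minimal sketch of the norm-based split described above: model blocks with small L2 norms receive lightweight Gaussian DP noise, while the remaining high-norm blocks are flagged for HE. The function name, the `dp_fraction` and `sigma` hyperparameters, and the use of Gaussian noise as the "lightweight DP" step are illustrative assumptions, not the paper's actual algorithm or calibrated noise scale.

```python
import numpy as np

def partition_and_protect(blocks, dp_fraction=0.5, sigma=0.1, rng=None):
    """Illustrative sketch (not the paper's exact method):
    sort model blocks by L2 norm, add Gaussian noise to the
    low-norm fraction (lightweight DP), and tag the high-norm
    remainder for homomorphic encryption.
    """
    rng = rng or np.random.default_rng(0)
    norms = [np.linalg.norm(b) for b in blocks]
    order = np.argsort(norms)               # ascending by norm
    n_dp = int(len(blocks) * dp_fraction)   # tunable DP/HE split
    dp_ids = set(order[:n_dp].tolist())
    protected = []
    for i, b in enumerate(blocks):
        if i in dp_ids:
            # low-norm block: perturb with Gaussian noise (DP side)
            protected.append(("dp", b + rng.normal(0.0, sigma, b.shape)))
        else:
            # high-norm block: would be encrypted under HE before upload
            protected.append(("he", b))
    return protected
```

Raising `dp_fraction` shifts more blocks to cheap noising (faster, less accurate); lowering it keeps more blocks under HE (slower, more accurate), which is the tunable efficiency-utility knob the abstract describes.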