🤖 AI Summary
To address practical challenges in federated learning—including privacy leakage, malicious server attacks (e.g., model inconsistency attacks), and dynamic client disconnections—this paper proposes a robust and secure aggregation framework. The framework leverages lightweight cryptographic primitives to realize, for the first time, a single-initialization, interaction-free, and dynamically disconnect-resilient secure aggregation protocol, with formal security proofs under both semi-honest and malicious adversary models. Key technical components include lightweight masking encryption, a middleware proxy architecture, an efficient key-agreement protocol, and a model-parameter consistency verification mechanism. Experimental evaluations demonstrate that the framework significantly reduces communication and computational overhead compared to state-of-the-art approaches, while achieving superior performance in security guarantees, functional completeness, and practical deployability.
📝 Abstract
Federated Learning (FL) allows users to collaboratively train a global machine learning model by sharing only their local model updates, without exposing their private data to a central server. This distributed learning paradigm is particularly appealing in scenarios where data privacy is crucial, and it has garnered substantial attention from both industry and academia. However, studies have revealed privacy vulnerabilities in FL: adversaries can potentially infer sensitive information from the shared model parameters. In this paper, we present an efficient masking-based secure aggregation scheme that uses lightweight cryptographic primitives to mitigate these privacy risks. Our scheme offers several advantages over existing methods. First, it requires only a single setup phase for the entire FL training session, significantly reducing communication overhead. Second, it minimizes user-side overhead by eliminating user-to-user interactions, relying instead on an intermediate server layer and a lightweight key-negotiation method. Third, the scheme is highly resilient to user dropouts, and users can join at any FL round. Fourth, it can detect and defend against malicious server activities, including recently discovered model inconsistency attacks. Finally, our scheme ensures security in both semi-honest and malicious settings. We provide a formal security analysis to prove the robustness of our approach, and we implemented an end-to-end prototype. Comprehensive experiments and comparisons show that our scheme outperforms existing solutions in communication and computation overhead, functionality, and security.
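To make the masking idea concrete, here is a minimal toy sketch of *pairwise additive masking*, the standard building block behind masking-based secure aggregation. This is an illustration of the general technique, not the paper's exact protocol: the seeds, modulus, and PRG here are hypothetical placeholders for the shared secrets the paper's key-agreement step would establish. Each pair of users derives a shared seed; one adds the pseudorandom pad to its update and the other subtracts it, so the pads cancel when the server sums the masked updates.

```python
import random

P = 2**31 - 1  # illustrative arithmetic modulus

def prg(seed, length):
    # Deterministic pseudorandom pad; both users in a pair expand
    # the same seed to the same pad.
    rng = random.Random(seed)
    return [rng.randrange(P) for _ in range(length)]

def mask_update(uid, update, pair_seeds):
    """pair_seeds: {other_uid: shared_seed} from a key-agreement step."""
    masked = list(update)
    for other, seed in pair_seeds.items():
        pad = prg(seed, len(update))
        sign = 1 if uid < other else -1  # opposite signs make pads cancel
        masked = [(m + sign * r) % P for m, r in zip(masked, pad)]
    return masked

# Three users with toy 4-dimensional model updates.
updates = {0: [1, 2, 3, 4], 1: [5, 6, 7, 8], 2: [9, 10, 11, 12]}
seeds = {(0, 1): 42, (0, 2): 43, (1, 2): 44}  # pairwise shared seeds

masked = {}
for uid, upd in updates.items():
    pair = {o: s for (a, b), s in seeds.items() for o in (a, b)
            if uid in (a, b) and o != uid}
    masked[uid] = mask_update(uid, upd, pair)

# The server sums the masked updates; the pairwise pads cancel,
# so it recovers the plain aggregate without seeing any single update.
agg = [sum(col) % P for col in zip(*masked.values())]
print(agg)  # → [15, 18, 21, 24], the plain sum of the three updates
```

Note that this naive variant is exactly where dropout resilience becomes hard: if a user's masked update never arrives, its pads no longer cancel, which is the problem the paper's dropout-handling mechanism is designed to solve.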