FuSeFL: Fully Secure and Scalable Cross-Silo Federated Learning

📅 2025-07-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing secure federated learning (FL) schemes rely on homomorphic encryption, differential privacy, or traditional multi-party computation (MPC). They struggle to achieve strong security and scalability simultaneously because of prohibitive computational and communication overhead, and they largely neglect the confidentiality of the global model itself, hindering cross-institutional deployment. This paper proposes FuSeFL, a fully secure, scalable, decentralized FL framework built on lightweight MPC: local training and encrypted gradient updates are performed pairwise among clients, while the server performs only secure aggregation, never accessing raw data, model parameters, or gradients. Crucially, the paper formally incorporates global-model secrecy into the FL security boundary. Compared with state-of-the-art approaches, experiments show the framework reduces communication latency by up to 95%, cuts server memory usage by up to 50%, and improves model accuracy, demonstrating that provable security and high efficiency can be achieved together at scale.

📝 Abstract
Federated Learning (FL) enables collaborative model training without centralizing client data, making it attractive for privacy-sensitive domains. While existing approaches employ cryptographic techniques such as homomorphic encryption, differential privacy, or secure multiparty computation to mitigate inference attacks, including model inversion, membership inference, and gradient leakage, they often suffer from high computational, communication, or memory overheads. Moreover, many methods overlook the confidentiality of the global model itself, which may be proprietary and sensitive. These challenges limit the practicality of secure FL, especially in cross-silo deployments involving large datasets and strict compliance requirements. We present FuSeFL, a fully secure and scalable FL scheme designed for cross-silo settings. FuSeFL decentralizes training across client pairs using lightweight secure multiparty computation (MPC), while confining the server's role to secure aggregation. This design eliminates server bottlenecks, avoids data offloading, and preserves full confidentiality of data, model, and updates throughout training. FuSeFL defends against inference threats, achieves up to 95% lower communication latency and 50% lower server memory usage, and improves accuracy over prior secure FL solutions, demonstrating strong security and efficiency at scale.
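To make the MPC idea in the abstract concrete, here is a minimal sketch of additive secret sharing, the kind of lightweight primitive such schemes build on. This is an illustration under assumptions, not FuSeFL's actual protocol: the field modulus, two-share setting, and function names are all invented here, and a real system would use a cryptographic RNG and fixed-point encoding of gradients.

```python
import random

P = 2**61 - 1  # prime modulus for the arithmetic shares (assumption, not from the paper)

def share(vec, n=2):
    """Split each entry of vec into n additive shares modulo P."""
    parts = [[random.randrange(P) for _ in vec] for _ in range(n - 1)]
    last = [(v - sum(cols)) % P for v, cols in zip(vec, zip(*parts))]
    return parts + [last]

def reconstruct(parts):
    """Sum corresponding shares modulo P to recover the values."""
    return [sum(cols) % P for cols in zip(*parts)]

# Two clients secret-share their (integer-encoded) toy updates.
u1, u2 = [3, 14, 15], [9, 2, 6]
s1, s2 = share(u1), share(u2)

# Each share-holder adds the shares it holds; only the *sum* of the
# updates is ever reconstructed, so individual updates stay hidden.
agg_shares = [[(a + b) % P for a, b in zip(s1[i], s2[i])] for i in range(2)]
print(reconstruct(agg_shares))  # [12, 16, 21]
```

Each share looks uniformly random on its own, which is why a party holding one share of an update learns nothing about it.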
Problem

Research questions and friction points this paper is trying to address.

Efficiently mitigates inference attacks in federated learning
Ensures confidentiality of global model and client data
Reduces communication and memory overhead in cross-silo FL
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decentralized training using lightweight secure MPC
Server role limited to secure aggregation
Reduces communication latency and server memory usage
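One common way to confine a server to "secure aggregation only", as the innovation bullets describe, is pairwise mask cancellation: each client pair agrees on a random mask that one adds and the other subtracts, so the masks vanish in the sum. The sketch below illustrates that general technique; the pairing scheme, modulus, and names are assumptions for illustration, not FuSeFL's construction.

```python
import random

P = 2**61 - 1  # modulus for masked arithmetic (assumption)

clients = [0, 1, 2]
updates = {0: [1, 2], 1: [3, 4], 2: [5, 6]}  # toy integer-encoded updates
dim = 2

# Each client pair (i, j) with i < j agrees on a shared random mask.
masks = {(i, j): [random.randrange(P) for _ in range(dim)]
         for i in clients for j in clients if i < j}

def masked_update(c):
    """Client c adds +mask toward higher-indexed peers, -mask toward lower."""
    out = list(updates[c])
    for (i, j), m in masks.items():
        if i == c:
            out = [(o + v) % P for o, v in zip(out, m)]
        elif j == c:
            out = [(o - v) % P for o, v in zip(out, m)]
    return out

# The server sums the masked updates; every mask appears once with +
# and once with -, so it learns only the aggregate, never any
# individual client's update.
agg = [sum(col) % P for col in zip(*(masked_update(c) for c in clients))]
print(agg)  # [9, 12]
```

The server's memory and compute here scale with the model dimension only, which is consistent with the reduced server footprint the paper reports.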