Privacy-Preserving Federated Learning against Malicious Clients Based on Verifiable Functional Encryption

📅 2025-06-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Federated learning (FL) must preserve data privacy while facing dual threats: model inversion attacks on transmitted updates and poisoning by malicious clients. Existing defenses often rely on trusted third parties or non-colluding dual-server assumptions, limiting practicality. This paper proposes the first decentralized verifiable functional encryption (DVFE) framework, enabling multi-dimensional ciphertext relationship verification without any trusted components. By integrating secure multi-party computation with a robust aggregation mechanism, the approach simultaneously achieves malicious client detection, model inversion resistance, and high-accuracy model training. The authors formalize the security model and provide rigorous proofs, demonstrating that the framework satisfies strong privacy guarantees, verifiability, and model fidelity. Empirical evaluation shows significantly improved robustness against malicious clients compared to prior methods that rely on trusted assumptions, while maintaining computational efficiency and scalability.

📝 Abstract
Federated learning is a promising distributed learning paradigm that enables collaborative model training without exposing local client data, thereby protecting data privacy. However, it also brings new threats and challenges. The advancement of model inversion attacks has rendered the plaintext transmission of local models insecure, while the distributed nature of federated learning makes it particularly vulnerable to attacks mounted by malicious clients. To protect data privacy and prevent malicious client attacks, this paper proposes a privacy-preserving federated learning framework based on verifiable functional encryption, without a non-colluding dual-server setup or an additional trusted third party. Specifically, we propose a novel decentralized verifiable functional encryption (DVFE) scheme that enables the verification of specific relationships over multi-dimensional ciphertexts. We give a formal treatment of the scheme, including its definition, security model, and security proof. Furthermore, based on the proposed DVFE scheme, we design a privacy-preserving federated learning framework, VFEFL, that incorporates a novel robust aggregation rule to detect malicious clients, enabling the effective training of high-accuracy models under adversarial settings. Finally, we provide formal analysis and empirical evaluation of the proposed schemes. The results demonstrate that our approach achieves the desired privacy protection, robustness, verifiability, and fidelity, while eliminating the reliance on the non-colluding dual-server settings or trusted third parties required by existing methods.
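To make the privacy goal concrete: the server should learn only the aggregate of client updates, never any individual update. The sketch below illustrates that property with pairwise additive masking, a classic secure-aggregation idiom; it is a conceptual stand-in, not the paper's DVFE construction, and all names in it are illustrative.

```python
# Conceptual sketch: server sees only masked updates, yet their mean equals the
# mean of the true updates because pairwise masks cancel (r[i][j] = -r[j][i]).
# This illustrates the aggregate-only privacy goal, NOT the DVFE scheme itself.
import random

def pairwise_masks(n_clients, dim, seed=0):
    """Derive cancelling pairwise masks: masks[i][j] = -masks[j][i]."""
    rng = random.Random(seed)
    masks = [[None] * n_clients for _ in range(n_clients)]
    for i in range(n_clients):
        for j in range(i + 1, n_clients):
            r = [rng.uniform(-1.0, 1.0) for _ in range(dim)]
            masks[i][j] = r
            masks[j][i] = [-x for x in r]
    return masks

def mask_update(update, i, masks):
    """Client i blinds its local update with all of its pairwise masks."""
    out = list(update)
    for r in masks[i]:
        if r is not None:
            out = [o + x for o, x in zip(out, r)]
    return out

def aggregate(masked_updates):
    """Server averages masked updates; the masks sum to zero and vanish."""
    n, dim = len(masked_updates), len(masked_updates[0])
    return [sum(u[k] for u in masked_updates) / n for k in range(dim)]
```

In a real deployment the pairwise randomness would be derived from key agreement between clients rather than a shared seed, and dropout handling adds considerable machinery; the DVFE scheme proposed in the paper additionally lets parties verify relationships over the ciphertexts, which plain masking cannot do.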
Problem

Research questions and friction points this paper is trying to address.

Preventing malicious client attacks in federated learning
Preserving privacy with verifiable functional encryption
Eliminating reliance on trusted third parties
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decentralized verifiable functional encryption scheme
Robust aggregation rule for malicious clients
No dual-server setup or trusted third party needed
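The paper's specific aggregation rule is not detailed in this summary, but robust aggregation against poisoned updates commonly works by discarding updates that deviate sharply from a robust reference before averaging. The sketch below uses median-anchored cosine filtering as a hypothetical illustration of that idea; it is not VFEFL's actual rule.

```python
# Hypothetical robust-aggregation illustration (NOT the paper's rule):
# drop updates whose direction disagrees with the coordinate-wise median
# update, then average the survivors.
import math
import statistics

def cosine(a, b):
    """Cosine similarity between two vectors; 0.0 if either is all-zero."""
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    if na == 0.0 or nb == 0.0:
        return 0.0
    return sum(x * y for x, y in zip(a, b)) / (na * nb)

def robust_aggregate(updates, threshold=0.0):
    """Average only the updates roughly aligned with the median update."""
    dim = len(updates[0])
    median = [statistics.median(u[k] for u in updates) for k in range(dim)]
    kept = [u for u in updates if cosine(u, median) > threshold]
    return [sum(u[k] for u in kept) / len(kept) for k in range(dim)]
```

A poisoned update pushed in the opposite direction of the honest majority has negative similarity to the median and is filtered out; in the paper's setting this kind of check must additionally be carried out over encrypted updates, which is where the DVFE scheme's verifiable ciphertext relationships come in.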