Fast, Private, and Protected: Safeguarding Data Privacy and Defending Against Model Poisoning Attacks in Federated Learning

📅 2024-06-26
🏛️ International Symposium on Computers and Communications
📈 Citations: 2
Influential: 0
🤖 AI Summary
To address the challenge of simultaneously ensuring privacy preservation and defending against model poisoning attacks in federated learning, this paper proposes a collaborative defense framework that jointly guarantees privacy, security, and robustness. Methodologically, it integrates secure aggregation to protect client data privacy; introduces a reputation-based dynamic participation mechanism coupled with model divergence analysis to detect malicious clients; and incorporates robust aggregation strategies—including Trimmed Mean and Median—along with post-attack training recovery capabilities. Evaluated in a Docker-based distributed simulation environment, the framework achieves stable convergence under model poisoning attacks, outperforms FedAvg and Power-of-Choice in convergence speed, and significantly enhances system security and robustness. The key innovation lies in the first holistic integration of reputation assessment, secure aggregation, and elastic recovery mechanisms—thereby unifying privacy protection, attack resilience, and training efficiency.
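The robust aggregation strategies mentioned above, Trimmed Mean and Median, are standard coordinate-wise defenses. As a minimal illustrative sketch (not the paper's actual implementation), the idea is to sort client values per coordinate and discard extremes before averaging, so a single poisoned update cannot drag the global model:

```python
import numpy as np

def trimmed_mean(updates, trim_ratio=0.1):
    """Coordinate-wise trimmed mean: drop the top and bottom trim_ratio
    fraction of client values per coordinate, then average the rest."""
    stacked = np.sort(np.stack(updates), axis=0)  # sort per coordinate
    k = int(len(stacked) * trim_ratio)
    kept = stacked[k:len(stacked) - k] if k > 0 else stacked
    return kept.mean(axis=0)

def coordinate_median(updates):
    """Coordinate-wise median of client updates."""
    return np.median(np.stack(updates), axis=0)

# Nine honest clients near 1.0, one poisoned client sending a huge update.
clients = [np.full(3, 1.0 + 0.01 * i) for i in range(9)] + [np.full(3, 100.0)]
print(np.mean(np.stack(clients), axis=0))   # plain mean is dragged toward 100
print(trimmed_mean(clients, trim_ratio=0.1))  # stays near the honest values
print(coordinate_median(clients))             # stays near the honest values
```

Both estimators remain close to the honest clients' values (~1.04) while the plain mean (as in FedAvg) is pulled above 10 by the single attacker.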

📝 Abstract
Federated Learning (FL) is a distributed training paradigm wherein participants collaborate to build a global model while ensuring the privacy of the involved data, which remains stored on participant devices. However, proposals aiming to ensure such privacy also make it challenging to protect against potential attackers seeking to compromise the training outcome. In this context, we present Fast, Private, and Protected (FPP), a novel approach that aims to safeguard federated training while enabling secure aggregation to preserve data privacy. This is accomplished by evaluating rounds using participants’ assessments and enabling training recovery after an attack. FPP also employs a reputation-based mechanism to mitigate the participation of attackers. We created a dockerized environment to validate the performance of FPP compared to other approaches in the literature (FedAvg, Power-of-Choice, and aggregation via Trimmed Mean and Median). Our experiments demonstrate that FPP achieves a rapid convergence rate and can converge even in the presence of malicious participants performing model poisoning attacks.
Problem

Research questions and friction points this paper is trying to address.

Safeguarding data privacy in federated learning against attacks
Defending against model poisoning from malicious participants
Enabling secure aggregation and training recovery mechanisms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Secure aggregation to preserve data privacy
Training recovery mechanism after attacks
Reputation-based mechanism to mitigate attackers
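A reputation-based filter of this kind can be sketched as follows. This is a hypothetical simplification, not FPP's actual algorithm: each client's divergence from the coordinate-wise median update is measured, outliers lose reputation, and only clients above a reputation threshold remain eligible for aggregation. The `threshold`, `decay`, and `reward` parameters are illustrative assumptions.

```python
import numpy as np

def update_reputation(reputation, updates, threshold=2.0, decay=0.9, reward=0.1):
    """Hypothetical reputation update: compare each client's update to the
    coordinate-wise median; penalize large divergence, reward agreement."""
    median = np.median(np.stack(list(updates.values())), axis=0)
    for cid, u in updates.items():
        divergence = np.linalg.norm(u - median)
        if divergence > threshold:
            reputation[cid] *= decay                       # penalize outliers
        else:
            reputation[cid] = min(1.0, reputation[cid] + reward)
    return reputation

reputation = {c: 1.0 for c in ["a", "b", "c", "d"]}
updates = {"a": np.ones(3), "b": np.ones(3) * 1.1,
           "c": np.ones(3) * 0.9, "d": np.ones(3) * 50.0}  # "d" poisons
for _ in range(3):
    reputation = update_reputation(reputation, updates)
eligible = [c for c, r in reputation.items() if r > 0.8]
print(eligible)  # the persistently divergent client "d" drops below threshold
```

Repeatedly divergent clients see their reputation decay geometrically, which mitigates their continued participation in later rounds.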
Nicolas Riccieri Gardin Assumpcao
Institute of Computing, State University of Campinas, Campinas, Brazil
Leandro Villas
UNICAMP
Distributed Systems · Machine Learning