FedNIA: Noise-Induced Activation Analysis for Mitigating Data Poisoning in FL

📅 2025-02-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Federated learning faces severe threats from coordinated data poisoning attacks launched by multiple malicious clients, and existing defenses often rely on centralized validation datasets or generalize poorly across attack types. This paper proposes a noise-induced activation analysis framework that requires no trusted validation data: it feeds random noise inputs to client models to characterize their layer-wise activation patterns, then uses an autoencoder for unsupervised anomaly detection on these activation fingerprints to identify and exclude malicious clients. To the authors' knowledge, this is the first defense capable of simultaneously mitigating sample poisoning, label-flipping, and backdoor attacks under non-IID data distributions. Extensive experiments demonstrate that, under standard non-IID settings, the method achieves over 92% defense success rate against diverse poisoning attacks, significantly enhancing the robustness of the global model.

📝 Abstract
Federated learning systems are increasingly threatened by data poisoning attacks, where malicious clients compromise global models by contributing tampered updates. Existing defenses often rely on impractical assumptions, such as access to a central test dataset, or fail to generalize across diverse attack types, particularly those involving multiple malicious clients working collaboratively. To address this, we propose Federated Noise-Induced Activation Analysis (FedNIA), a novel defense framework that identifies and excludes adversarial clients without relying on any central test dataset. FedNIA injects random noise inputs to analyze the layer-wise activation patterns in client models, leveraging an autoencoder that detects abnormal behaviors indicative of data poisoning. FedNIA can defend against diverse attack types, including sample poisoning, label flipping, and backdoors, even in scenarios with multiple attacking nodes. Experimental results on non-IID federated datasets demonstrate its effectiveness and robustness, underscoring its potential as a foundational approach for enhancing the security of federated learning systems.
Problem

Research questions and friction points this paper is trying to address.

Mitigate data poisoning in federated learning
Identify adversarial clients without central test dataset
Defend against diverse attack types, including collaborative multi-client attacks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Noise-induced activation analysis
Autoencoder for anomaly detection
Robust against diverse attack types
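The pipeline above (noise probing → layer-wise activation fingerprints → autoencoder anomaly scoring) can be sketched as follows. This is a minimal toy illustration of the idea, not the paper's implementation: the two-layer client models, the per-layer summary statistic, and the tiny linear autoencoder are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def activation_fingerprint(weights, noise):
    """Layer-wise mean absolute activation under random-noise inputs."""
    feats, x = [], noise
    for W in weights:
        x = np.maximum(0.0, x @ W)        # ReLU activations of this layer
        feats.append(np.abs(x).mean())    # one summary statistic per layer
    return np.array(feats)

# Toy federation: 10 benign clients plus 2 "poisoned" ones whose weights
# are shifted (a stand-in for updates corrupted by poisoned training data).
def make_client(shift=0.0):
    return [rng.normal(shift, 1.0, (8, 16)), rng.normal(shift, 1.0, (16, 4))]

clients = [make_client() for _ in range(10)] + [make_client(1.5) for _ in range(2)]
noise = rng.normal(size=(64, 8))                       # shared noise probe
F = np.stack([activation_fingerprint(c, noise) for c in clients])  # (12, 2)

# Server-side anomaly detector: a tiny linear autoencoder trained on the
# (standardized) fingerprints with plain gradient descent. Clients whose
# fingerprints reconstruct poorly are candidates for exclusion.
X = (F - F.mean(0)) / (F.std(0) + 1e-8)
W_enc = rng.normal(0, 0.1, (X.shape[1], 1))
W_dec = rng.normal(0, 0.1, (1, X.shape[1]))
for _ in range(500):
    Z = X @ W_enc                     # encode to 1-D bottleneck
    E = Z @ W_dec - X                 # reconstruction error
    W_dec -= 0.01 * Z.T @ E / len(X)
    W_enc -= 0.01 * X.T @ (E @ W_dec.T) / len(X)

err = ((X - (X @ W_enc) @ W_dec) ** 2).mean(axis=1)    # per-client score
ranked = np.argsort(err)[::-1]        # highest-error clients first
```

In a real deployment the fingerprint would be far richer (many layers, multiple statistics per layer), giving the autoencoder a low-dimensional benign manifold to learn; with only two toy features, the ranking is merely indicative.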