🤖 AI Summary
Federated learning in IoT environments is vulnerable to data poisoning attacks, yet existing detection methods lack standardization and trustworthy consensus mechanisms. To address this, we propose a blockchain-based decentralized defense framework. Our method has clients collaboratively construct and jointly validate lightweight discriminative models via Byzantine fault-tolerant consensus, allowing distributed and verifiable detection of malicious model updates without relying on any trusted third party. The framework integrates federated learning, blockchain, and Byzantine-resilient consensus while preserving privacy and ensuring detection integrity. Experimental results demonstrate that our approach significantly enhances model robustness against diverse poisoning attacks, improving test accuracy by 12.7%–34.2%. Moreover, the discriminative model scales well and incurs low communication overhead, making it practical for resource-constrained IoT deployments.
📝 Abstract
Federated learning enhances traditional deep learning by enabling the joint training of a model on IoT devices' private data. It preserves client privacy but is susceptible to data poisoning attacks during training, which degrade model performance and integrity. Existing poisoning detection methods in federated learning either lack standardization or place excessive trust in a central party. In this paper, we present Sys, a novel blockchain-enabled poison detection framework for federated learning. The framework decentralizes the role of the global server across participating clients. We introduce a judge model that detects data poisoning in model updates: each client produces a candidate judge model, and the candidates are verified to reach consensus on a single judge model. We implement our solution and show that Sys is robust against data poisoning attacks and that the creation of our judge model is scalable.
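The workflow above — each client proposing a judge model, the group agreeing on one via Byzantine-tolerant voting, and the winner filtering poisoned updates — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the names (`JudgeModel`, `consensus_select`, `filter_updates`), the norm-threshold discriminator, and the simple majority vote standing in for full BFT consensus are all assumptions.

```python
# Hypothetical sketch of the judge-model workflow; all names and the
# threshold-based discriminator are illustrative assumptions, not Sys's API.
import hashlib
from collections import Counter
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class JudgeModel:
    """A lightweight discriminator over model updates (stub: norm threshold)."""
    norm_threshold: float

    def is_poisoned(self, update: List[float]) -> bool:
        # Flag updates whose L2 magnitude exceeds the threshold.
        return sum(x * x for x in update) ** 0.5 > self.norm_threshold

    def digest(self) -> str:
        # Content hash, so peers can vote on identical candidates.
        return hashlib.sha256(repr(self.norm_threshold).encode()).hexdigest()

def consensus_select(candidates: List[JudgeModel]) -> JudgeModel:
    """Majority vote over candidate judges (stand-in for BFT consensus).

    With n = 3f + 1 clients, a candidate endorsed by more than 2f votes
    survives up to f Byzantine proposers.
    """
    votes = Counter(j.digest() for j in candidates)
    winner, count = votes.most_common(1)[0]
    f = (len(candidates) - 1) // 3
    assert count > 2 * f, "no candidate reached Byzantine quorum"
    return next(j for j in candidates if j.digest() == winner)

def filter_updates(judge: JudgeModel,
                   updates: List[List[float]]) -> List[List[float]]:
    """Keep only the updates the agreed judge classifies as benign."""
    return [u for u in updates if not judge.is_poisoned(u)]

# Three honest clients propose the same judge; one Byzantine client deviates.
candidates = [JudgeModel(1.0), JudgeModel(1.0), JudgeModel(1.0), JudgeModel(99.0)]
judge = consensus_select(candidates)
benign = filter_updates(judge, [[0.1, 0.2], [5.0, 5.0]])  # second update is outsized
```

In this toy run the honest majority's judge wins the vote, and the oversized `[5.0, 5.0]` update is rejected before aggregation; in the actual framework the vote would run over a blockchain-backed BFT protocol rather than a local tally.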