🤖 AI Summary
Federated learning (FL) is vulnerable to model poisoning attacks launched by malicious participants, which compromise global model robustness and training integrity. To address this, we propose a poisoning-resilient FL framework tailored for edge computing that integrates dynamic trust scoring, device-level blockchain-based token authentication, and consensus-driven reputation verification into the FL workflow, enabling real-time identification and automatic exclusion of malicious nodes. The framework combines smart contracts, contribution-aware reputation modeling, Median Absolute Deviation (MAD)-based statistical anomaly detection, distributed device-aware coordination, and a hybrid Proof-of-Work/Proof-of-Stake (PoW/PoS) consensus mechanism. Evaluated on CIFAR-10 and Fashion-MNIST, our approach mitigates 98.3% of poisoning attacks, improves global model accuracy by 12.7%, achieves an F1-score of 0.96 for malicious node detection, and maintains on-chain verification latency below 150 ms. The framework significantly enhances FL security, fairness, and auditability.
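The MAD-based statistical anomaly detection mentioned above can be illustrated with the standard modified z-score. This is a minimal sketch, not the paper's implementation: the 3.5 threshold and the choice of per-client update norms as the screened statistic are our assumptions.

```python
import numpy as np

def mad_outliers(values, threshold=3.5):
    """Flag entries whose modified z-score exceeds `threshold`.

    The modified z-score uses the median and the Median Absolute
    Deviation (MAD) instead of the mean and standard deviation, so a
    few extreme (poisoned) updates cannot mask themselves by inflating
    the spread. 0.6745 is the consistency constant that makes MAD
    comparable to the standard deviation under a normal distribution.
    """
    values = np.asarray(values, dtype=float)
    median = np.median(values)
    mad = np.median(np.abs(values - median))
    if mad == 0:
        # All values (effectively) identical: nothing to flag.
        return np.zeros(len(values), dtype=bool)
    modified_z = 0.6745 * (values - median) / mad
    return np.abs(modified_z) > threshold

# Example: screen the L2 norms of five clients' local updates;
# the fifth norm is far from the rest and would be flagged.
update_norms = [1.0, 1.1, 0.9, 1.05, 9.0]
flags = mad_outliers(update_norms)
```

A server-side filter of this shape would simply drop (or down-weight) the flagged updates before aggregation, and could feed the flags into the reputation model.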
📝 Abstract
Federated Learning (FL) is a privacy-preserving distributed machine learning scheme in which each participant's data remains on its device and only the local model, trained with local computational resources, is transmitted to the server. However, the distributed computational nature of FL creates the need for a mechanism that can remotely verify network agents, track their activities, and prevent threats to the overall process posed by malicious participants. In particular, the FL paradigm is vulnerable to an active attack from network participants known as a poisoning attack, in which a malicious participant poses as a benign agent and degrades the global model quality by uploading an obfuscated, poisoned local model update to the server. This paper presents a cross-device FL model that ensures trustworthiness, fairness, and authenticity in the underlying FL training process. We achieve trustworthiness by constructing a reputation-based trust model driven by each agent's contribution toward model convergence. We ensure fairness by identifying and removing malicious agents from the training process through an outlier detection technique. We establish authenticity by generating a token for each participating device through a distributed sensing mechanism and storing that unique token in a blockchain smart contract. Finally, we record the trust scores of all agents on a blockchain and validate their reputations using consensus mechanisms that account for the computational task.
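A contribution-aware reputation model of the kind described could take many forms; the following is a hypothetical minimal sketch, not the paper's formula. The blending factor `alpha`, the multiplicative penalty for flagged updates, the clipping to [0, 1], and the exclusion threshold are all our assumptions.

```python
def update_trust(score, contribution, flagged, alpha=0.2, penalty=0.5):
    """Update one agent's trust score for a training round.

    `contribution` in [0, 1] is a per-round signal of how much the
    agent's update helped model convergence; `flagged` is True when
    outlier detection marked the update as suspicious.
    """
    # Exponential moving average blends the round's contribution
    # into the running score.
    score = (1 - alpha) * score + alpha * contribution
    # A flagged (potentially poisoned) update sharply reduces trust.
    if flagged:
        score *= penalty
    # Keep the score in [0, 1] so it can be stored and compared on-chain.
    return max(0.0, min(1.0, score))

def eligible(score, threshold=0.3):
    """Agents whose trust falls below the threshold are excluded
    from the next training round (a hypothetical cutoff)."""
    return score >= threshold
```

Under this scheme, a benign agent's score tracks its average contribution, while repeated flags decay a malicious agent's score geometrically until it drops below the participation threshold; the resulting scores are what would be written to the blockchain for consensus-based validation.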