🤖 AI Summary
In 5G and edge networks, federated learning faces unknown malicious attacks, including identity spoofing, backdoor injection, and label flipping, whose characteristics are often unavailable a priori. To address this, we propose a prior-free, dual-path robust defense mechanism: (1) geometric anomaly detection, which measures distances between model updates to identify outliers in real time; and (2) momentum-driven dynamic reputation tracking, which accumulates each client's historical behavior and adaptively penalizes malicious participants to enforce long-term deterrence. Together, these components enable dynamic update filtering and sustained adversarial suppression, and ablation studies confirm their strong complementarity. Evaluated on a proprietary 5G dataset and NF-CSE-CIC-IDS2018, our method achieves global model accuracies of 98.66% and 96.60%, respectively, significantly outperforming state-of-the-art aggregation schemes including Krum, Trimmed Mean, and Bulyan.
📝 Abstract
Federated Learning (FL) in 5G and edge network environments faces severe security threats from adversarial clients. Malicious participants can perform label flipping, inject backdoor triggers, or launch Sybil attacks to corrupt the global model. This paper introduces Hybrid Reputation Aggregation (HRA), a novel robust aggregation mechanism that defends against diverse adversarial behaviors in FL without prior knowledge of the attack type. HRA combines geometric anomaly detection with momentum-based reputation tracking of clients. In each round, it detects outlier model updates via distance-based geometric analysis while continuously updating a trust score for each client based on historical behavior. This hybrid approach enables adaptive filtering of suspicious updates and long-term penalization of unreliable clients, countering attacks ranging from backdoor insertion to random-noise Byzantine failures. We evaluate HRA on a large-scale proprietary 5G network dataset (3M+ records) and the widely used NF-CSE-CIC-IDS2018 benchmark under diverse adversarial attack scenarios. Experimental results show that HRA achieves robust global model accuracy of up to 98.66% on the 5G dataset and 96.60% on NF-CSE-CIC-IDS2018, outperforming state-of-the-art aggregators such as Krum, Trimmed Mean, and Bulyan by significant margins. Our ablation studies further show that the full hybrid system reaches 98.66% accuracy, while the anomaly-only and reputation-only variants drop to 84.77% and 78.52%, respectively, validating the synergistic value of the dual-mechanism design. These results demonstrate HRA's resilience in 5G/edge federated learning deployments, even under strong adversarial conditions.
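The round-level procedure described above, geometric outlier filtering combined with momentum-based reputation, can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the function name, the median-based distance rule, and all thresholds and momentum values are assumptions chosen for clarity.

```python
import numpy as np

def hybrid_reputation_aggregate(updates, reputations, momentum=0.9,
                                dist_threshold=2.0, rep_cutoff=0.3):
    """One aggregation round of a dual-path robust defense (illustrative).

    updates:     (n_clients, d) array of flattened client model updates.
    reputations: (n_clients,) trust scores carried across rounds.
    All parameter values here are hypothetical, not taken from HRA.
    """
    # Path 1: geometric anomaly detection -- distance of each update
    # from the coordinate-wise median of all updates this round.
    center = np.median(updates, axis=0)
    dists = np.linalg.norm(updates - center, axis=1)
    scale = np.median(dists) + 1e-12
    inlier = dists <= dist_threshold * scale  # outliers flagged this round

    # Path 2: momentum-driven reputation tracking -- accumulate each
    # client's history, so repeat offenders are penalized long term.
    reputations = momentum * reputations + (1 - momentum) * inlier.astype(float)

    # Aggregate only non-outlier updates from sufficiently trusted
    # clients, weighted by their current reputation.
    mask = inlier & (reputations >= rep_cutoff)
    if not mask.any():
        mask = inlier  # fall back to the geometric filter alone
    weights = reputations[mask] / reputations[mask].sum()
    global_update = weights @ updates[mask]
    return global_update, reputations
```

The two paths are complementary in the sense the ablation results suggest: the geometric filter reacts immediately within a round, while the reputation term suppresses clients that behave maliciously across many rounds even if an individual update slips past the filter.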