DEFEND: Poisoned Model Detection and Malicious Client Exclusion Mechanism for Secure Federated Learning-based Road Condition Classification

📅 2025-12-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
In federated learning (FL)-based road condition classification (RCC) for intelligent transportation systems, targeted label-flipping attacks (TLFAs) pose a critical security threat—specifically, maliciously misclassifying hazardous conditions (e.g., slippery roads) as safe ones (e.g., dry roads), thereby compromising traffic safety. To address this, we propose DEFEND, a novel defense mechanism. Its core contributions are twofold: (1) it introduces the first neuron-level activation magnitude analysis integrated with Gaussian mixture model (GMM) clustering to precisely identify TLFA-targeted clients; and (2) it designs a dynamic client scoring and adaptive aggregation scheme that actively excludes compromised participants post-detection. Extensive experiments across multiple FL-RCC benchmarks demonstrate that DEFEND consistently outperforms seven state-of-the-art defense baselines, achieving ≥15.78% higher classification accuracy and fully recovering model performance to the clean (attack-free) baseline level under TLFA, thereby ensuring both model robustness and transportation safety.
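The dynamic client scoring and adaptive aggregation described above can be sketched as follows. This is a minimal illustration under assumptions, not DEFEND's actual scheme: the rating update rule, the penalty/reward values, and the trust threshold are all hypothetical choices made for the example.

```python
import numpy as np

def update_ratings(ratings, flagged, penalty=0.5, reward=0.1):
    """Illustrative rating update: penalize clients flagged as poisoned this
    round, slowly restore the rating of clients that behaved well.
    (penalty/reward values are assumptions, not from the paper.)"""
    ratings = np.asarray(ratings, dtype=float).copy()
    flagged = np.asarray(flagged, dtype=bool)
    ratings[flagged] -= penalty
    ratings[~flagged] = ratings[~flagged] + reward
    return np.clip(ratings, 0.0, 1.0)

def aggregate(updates, ratings, flagged, threshold=0.3):
    """Aggregate only updates from unflagged clients whose rating is still
    above a trust threshold; excluded clients contribute nothing."""
    updates = np.asarray(updates, dtype=float)
    keep = (~np.asarray(flagged, dtype=bool)) & (np.asarray(ratings) >= threshold)
    if not keep.any():
        raise ValueError("no trustworthy clients this round")
    return updates[keep].mean(axis=0)
```

A client flagged repeatedly sees its rating decay below the threshold and is then excluded from aggregation even in rounds where it is not flagged, which matches the "eventually excluding malicious clients" behavior described in the summary.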

📝 Abstract
Federated Learning (FL) has drawn the attention of the Intelligent Transportation Systems (ITS) community. FL can train various models for ITS tasks, notably camera-based Road Condition Classification (RCC), in a privacy-preserving collaborative way. However, opening up to collaboration also opens FL-based RCC systems to adversaries, i.e., misbehaving participants that can launch Targeted Label-Flipping Attacks (TLFAs) and threaten transportation safety. Adversaries mounting TLFAs poison training data to misguide model predictions, from an actual source class (e.g., wet road) to a wrongly perceived target class (e.g., dry road). Existing countermeasures against poisoning attacks cannot maintain model performance under TLFAs close to the performance level in attack-free scenarios, because they lack model misbehavior detection specific to TLFAs and neglect client exclusion after detection. To close this research gap, we propose DEFEND, which includes a poisoned model detection strategy that leverages neuron-wise magnitude analysis for attack goal identification and Gaussian Mixture Model (GMM)-based clustering. DEFEND discards poisoned model contributions in each round and adapts client ratings accordingly, eventually excluding malicious clients. Extensive evaluation involving various FL-RCC models and tasks shows that DEFEND can thwart TLFAs and outperform seven baseline countermeasures, with at least a 15.78% improvement; remarkably, DEFEND achieves the same performance under attack as in attack-free scenarios.
Problem

Research questions and friction points this paper is trying to address.

How can poisoned models be detected in federated learning for road condition classification?
How can malicious clients be excluded so that model performance holds up under attack?
How can targeted label-flipping attacks be thwarted to preserve transportation safety?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Neuron-wise magnitude analysis for attack identification
Gaussian Mixture Model clustering to detect poisoned models
Client exclusion mechanism to discard malicious contributions
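The detection idea above — score each client's update with a neuron-wise magnitude statistic, then cluster the scores with a Gaussian mixture — can be sketched as follows. This is a minimal illustration, not DEFEND's implementation: it fits a two-component 1-D GMM via EM over per-client scores (assumed to be, e.g., mean update magnitudes at attack-relevant output neurons) and flags the minority cluster as poisoned.

```python
import numpy as np

def fit_gmm_1d(x, n_iter=100):
    """Fit a 2-component 1-D Gaussian mixture with plain EM (minimal sketch)."""
    mu = np.array([x.min(), x.max()], dtype=float)  # spread initial means
    var = np.full(2, x.var() + 1e-6)
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point
        dens = pi / np.sqrt(2 * np.pi * var) * np.exp(
            -(x[:, None] - mu) ** 2 / (2 * var))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, variances
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
        pi = nk / len(x)
    return resp

def flag_poisoned(client_scores):
    """Cluster per-client magnitude scores; treat the minority cluster as
    poisoned (an assumption: adversaries are a minority of clients)."""
    scores = np.asarray(client_scores, dtype=float)
    labels = fit_gmm_1d(scores).argmax(axis=1)
    minority = 0 if (labels == 0).sum() < (labels == 1).sum() else 1
    return labels == minority
```

For example, `flag_poisoned([0.1, 0.12, 0.09, 0.11, 1.0, 1.05])` separates the four low-magnitude clients from the two outliers and flags only the latter. In practice one would use a library GMM (e.g., scikit-learn's `GaussianMixture`); the hand-rolled EM here only keeps the sketch self-contained.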
Sheng Liu
Networked Systems Security Group, KTH Royal Institute of Technology, Stockholm, Sweden
Panos Papadimitratos
KTH (Royal Institute of Technology)
Security · Privacy · Networking · Wireless communications