🤖 AI Summary
In autonomous driving, federated learning (FL) is vulnerable to poisoning and inference attacks, which can cause safety-critical misclassifications such as traffic light errors. To address this, we propose a collaborative defense mechanism that integrates secure aggregation with lightweight multi-party computation (MPC), ensuring gradient privacy during client-side local training and robustness against malicious client tampering during server-side model aggregation, while balancing efficiency and security. Empirical evaluation on the LISA traffic light dataset demonstrates that our method reduces accuracy degradation under poisoning attacks by 62%, substantially mitigates parameter leakage from inference attacks, and keeps traffic light misclassification below 0.8%. This work is the first systematic validation of MPC-enhanced secure aggregation in autonomous-driving FL settings, establishing its effectiveness and robustness, and it provides a practical, deployable security framework for trustworthy collaborative learning in edge-intelligent systems.
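To make the secure-aggregation idea concrete, the following is a minimal sketch of pairwise additive masking, a common MPC building block for secure aggregation (it is not the paper's exact protocol). Each pair of clients agrees on a random mask that one adds and the other subtracts, so the server sees only masked updates, yet the masks cancel in the sum. The client updates and dimensions here are hypothetical values for illustration.

```python
import numpy as np

def pairwise_masks(n_clients, dim, seed=0):
    """Generate cancelling pairwise masks: for each pair (i, j),
    client i adds +m and client j adds -m, so the masks sum to zero."""
    rng = np.random.default_rng(seed)
    masks = np.zeros((n_clients, dim))
    for i in range(n_clients):
        for j in range(i + 1, n_clients):
            m = rng.normal(size=dim)
            masks[i] += m   # client i's share of the pairwise mask
            masks[j] -= m   # client j's share; cancels client i's
    return masks

# Hypothetical local model updates from 4 edge vehicles (3-dim gradients).
updates = np.array([[ 0.1, 0.2, -0.3],
                    [ 0.0, 0.5,  0.1],
                    [-0.2, 0.1,  0.2],
                    [ 0.3, -0.1, 0.0]])

masks = pairwise_masks(n_clients=4, dim=3)
masked = updates + masks        # what the server actually receives
aggregate = masked.sum(axis=0)  # pairwise masks cancel in the sum

# The server recovers the correct aggregate without seeing any raw update.
assert np.allclose(aggregate, updates.sum(axis=0))
```

Real deployments (e.g., Bonawitz-style secure aggregation) additionally handle client dropout and derive masks from shared keys; this sketch omits both for clarity.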
📝 Abstract
Federated learning is a promising paradigm for distributed learning in autonomous-vehicle applications: it preserves data privacy while improving predictive model performance through collaborative training on edge client vehicles. However, it remains vulnerable to several categories of cyber-attacks, necessitating more robust security measures to mitigate potential threats effectively. Poisoning attacks and inference attacks are commonly launched within the federated learning environment to compromise system performance. Secure aggregation can limit the disclosure of sensitive information to both outsider and insider attackers of the federated learning environment. In this study, we conduct an empirical analysis on a transportation image dataset (the LISA traffic light dataset) using various secure aggregation techniques and multiparty computation in the presence of diverse categories of cyber-attacks. Multiparty computation is a state-of-the-art security mechanism that offers standard privacy guarantees for the secure aggregation of edge autonomous vehicles' local model updates through various security protocols. Adversaries can mislead the autonomous vehicle's learning model, causing traffic lights to be misclassified with potentially detrimental consequences. This empirical study explores the resilience of various secure federated learning aggregation techniques and multiparty computation in safeguarding autonomous vehicle applications against cyber threats at both training and inference time.