🤖 AI Summary
Traffic sign recognition systems in autonomous driving are vulnerable to adversarial attacks. To address this security threat, this paper proposes a safety-enhancement framework based on N-version machine learning. The core methodological contribution is the use of Failure Mode and Effects Analysis (FMEA) to dynamically quantify the reliability of individual heterogeneous submodels under diverse adversarial attack scenarios, enabling a safety-aware weighted soft voting mechanism. The framework combines multiple architecturally distinct models and evaluates their robustness against adversarial examples generated with the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD). Experimental results demonstrate that, under both FGSM and PGD attacks, the proposed approach significantly improves classification accuracy (+12.7% to +23.4%) and robustness over baseline ensemble methods, strengthens the system's response to previously unseen attacks, and ensures safer operational behavior in adversarial environments.
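The safety-aware weighted soft voting described above can be sketched as a weighted average of the submodels' class-probability vectors. This is a minimal illustration, not the paper's implementation: the weight values and model outputs below are hypothetical, standing in for FMEA-derived reliability scores.

```python
import numpy as np

def safety_weighted_soft_vote(probs, weights):
    """Combine per-model class-probability vectors with safety-aware weights.

    probs:   shape (n_models, n_classes) -- softmax outputs of each
             heterogeneous submodel (illustrative values here).
    weights: shape (n_models,) -- reliability weights, e.g. derived from
             FMEA risk scores; a higher weight means a more trusted model.
    """
    probs = np.asarray(probs, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                 # normalize weights to sum to 1
    combined = w @ probs            # weighted average of probability vectors
    return int(np.argmax(combined)), combined

# Three submodels disagree on a 3-class sign; the weighted vote
# follows the two more reliable models rather than a plain majority.
probs = [[0.7, 0.2, 0.1],          # model A
         [0.6, 0.3, 0.1],          # model B
         [0.1, 0.8, 0.1]]          # model C (lowest FMEA-derived weight)
weights = [0.45, 0.40, 0.15]       # hypothetical safety-aware weights
label, combined = safety_weighted_soft_vote(probs, weights)
```

With unweighted soft voting the three models would be averaged equally; the safety-aware weights instead discount the least reliable model's confident but divergent vote.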
📝 Abstract
Autonomous driving is rapidly advancing as a key application of machine learning, yet ensuring the safety of these systems remains a critical challenge. Traffic sign recognition, an essential component of autonomous vehicles, is particularly vulnerable to adversarial attacks that can compromise driving safety. In this paper, we propose an N-version machine learning (NVML) framework that integrates a safety-aware weighted soft voting mechanism. Our approach utilizes Failure Mode and Effects Analysis (FMEA) to assess potential safety risks and assign dynamic, safety-aware weights to the ensemble outputs. We evaluate the robustness of three-version NVML systems employing various voting mechanisms against adversarial samples generated using the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD) attacks. Experimental results demonstrate that our NVML approach significantly enhances the robustness and safety of traffic sign recognition systems under adversarial conditions.
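The two attacks used in the evaluation can be sketched compactly. FGSM takes a single step of size ε in the sign of the input gradient of the loss; PGD iterates smaller such steps and projects back into the ε-ball around the original input. The sketch below assumes a linear softmax classifier, for which the input gradient has a closed form; the paper attacks deep traffic sign models, and `W`, `eps`, and `alpha` here are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm(x, y_onehot, W, eps):
    """One-step FGSM: x_adv = x + eps * sign(grad_x CE_loss).

    For a linear softmax classifier with logits W @ x, the cross-entropy
    gradient w.r.t. the input is W.T @ (softmax(W @ x) - y).
    """
    grad = W.T @ (softmax(W @ x) - y_onehot)
    return x + eps * np.sign(grad)

def pgd(x, y_onehot, W, eps, alpha, steps):
    """PGD: iterate small FGSM steps, projecting into the L_inf eps-ball."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = fgsm(x_adv, y_onehot, W, alpha)
        x_adv = np.clip(x_adv, x - eps, x + eps)   # projection step
    return x_adv

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))        # hypothetical classifier weights
x = rng.normal(size=5)             # hypothetical input (e.g. flattened image)
y = np.array([1.0, 0.0, 0.0])      # true label, one-hot
x_fgsm = fgsm(x, y, W, eps=0.1)
x_pgd = pgd(x, y, W, eps=0.1, alpha=0.03, steps=10)
```

Both outputs stay within the ε-ball around `x`, which is exactly the perturbation budget the robustness evaluation varies.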