Robust and Safe Traffic Sign Recognition using N-version with Weighted Voting

📅 2025-07-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Autonomous driving traffic sign recognition systems are vulnerable to adversarial attacks. To address this security threat, this paper proposes an N-version machine learning–based safety enhancement framework. The core methodological contribution is the integration of Failure Mode and Effects Analysis (FMEA) to dynamically quantify the reliability of individual heterogeneous submodels under diverse adversarial attack scenarios, thereby enabling a safety-aware weighted soft voting mechanism. The framework incorporates multiple architecturally distinct models and rigorously evaluates robustness against adversarial examples generated via the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD). Experimental results demonstrate that, under both FGSM and PGD attacks, the proposed approach significantly improves classification accuracy (+12.7% to +23.4%) and robustness over baseline ensemble methods. It further enhances the system’s capability to respond to previously unseen attacks and ensures safer operational behavior in adversarial environments.
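The weighted soft voting mechanism described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the three-class example, and the weight values are assumptions; in the paper the weights come from an FMEA-based reliability assessment of each submodel.

```python
# Sketch of safety-aware weighted soft voting over N submodel outputs.
# The weights are assumed to encode each model's FMEA-assessed reliability;
# all names and values here are illustrative, not from the paper.

def weighted_soft_vote(prob_vectors, weights):
    """Combine per-model class-probability vectors using per-model weights.

    prob_vectors: list of N probability vectors (one per submodel).
    weights: list of N non-negative reliability weights.
    Returns the index of the class with the highest weighted probability.
    """
    total = sum(weights)
    n_classes = len(prob_vectors[0])
    combined = [0.0] * n_classes
    for probs, w in zip(prob_vectors, weights):
        for c, p in enumerate(probs):
            combined[c] += (w / total) * p
    return max(range(n_classes), key=lambda c: combined[c])

# Example: three heterogeneous models, three sign classes.
votes = [
    [0.7, 0.2, 0.1],   # model A, high reliability weight
    [0.1, 0.8, 0.1],   # model B, low weight (unreliable under attack)
    [0.6, 0.3, 0.1],   # model C
]
print(weighted_soft_vote(votes, [0.5, 0.1, 0.4]))  # class 0 wins despite model B
```

Down-weighting the submodel that fails under attack is what lets the ensemble override a single compromised vote, unlike unweighted majority voting.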

📝 Abstract
Autonomous driving is rapidly advancing as a key application of machine learning, yet ensuring the safety of these systems remains a critical challenge. Traffic sign recognition, an essential component of autonomous vehicles, is particularly vulnerable to adversarial attacks that can compromise driving safety. In this paper, we propose an N-version machine learning (NVML) framework that integrates a safety-aware weighted soft voting mechanism. Our approach utilizes Failure Mode and Effects Analysis (FMEA) to assess potential safety risks and assign dynamic, safety-aware weights to the ensemble outputs. We evaluate the robustness of three-version NVML systems employing various voting mechanisms against adversarial samples generated using the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD) attacks. Experimental results demonstrate that our NVML approach significantly enhances the robustness and safety of traffic sign recognition systems under adversarial conditions.
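The FGSM attack evaluated in the abstract perturbs each input coordinate by a small step in the direction of the loss gradient, x_adv = x + eps * sign(∇_x loss). A toy sketch on a hand-written logistic model, purely for intuition (the paper attacks deep traffic-sign classifiers; the weights, input, and eps below are illustrative assumptions):

```python
import math

# Minimal FGSM sketch: for logistic loss on label y in {+1, -1},
# d(loss)/dx_j = -y * sigmoid(-y * w.x) * w_j, and since the sigmoid
# factor is positive, sign(grad_j) = sign(-y * w_j).

def predict(w, x):
    """P(label = +1) under a logistic model with weights w."""
    return 1.0 / (1.0 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))

def fgsm(w, x, y, eps):
    """One FGSM step: nudge every coordinate by eps in the direction
    that increases the loss for true label y."""
    def sign(v):
        return (v > 0) - (v < 0)
    return [xi + eps * sign(-y * wi) for xi, wi in zip(x, w)]

w = [2.0, -1.0]
x = [1.0, 0.5]          # clean input, true label +1
x_adv = fgsm(w, x, +1, eps=0.3)
print(predict(w, x), predict(w, x_adv))  # confidence in +1 drops after the attack
```

PGD, the second attack in the abstract, is essentially this step applied iteratively with a projection back into an eps-ball around the clean input.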
Problem

Research questions and friction points this paper is trying to address.

Enhancing traffic sign recognition safety against adversarial attacks
Proposing N-version ML framework with weighted voting for robustness
Evaluating NVML resilience to FGSM and PGD adversarial samples
Innovation

Methods, ideas, or system contributions that make the work stand out.

N-version machine learning framework
Safety-aware weighted soft voting
FMEA for dynamic risk assessment
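The innovations above can be connected in a short sketch. FMEA conventionally scores each failure mode by a Risk Priority Number, RPN = severity × occurrence × detection; mapping higher risk to lower voting weight is one plausible reading of the paper's "safety-aware" weighting, shown here under that assumption with illustrative scores:

```python
# Hedged sketch: deriving voting weights from FMEA-style Risk Priority
# Numbers (RPN = severity * occurrence * detection, each scored 1-10).
# Higher risk -> lower weight. The scores below are illustrative
# assumptions, not values from the paper.

def fmea_weights(rpns):
    """Map per-submodel RPNs to normalized voting weights (inverse risk)."""
    inv = [1.0 / r for r in rpns]
    s = sum(inv)
    return [v / s for v in inv]

# Three submodels scored under an adversarial-attack failure mode.
rpns = [2 * 3 * 2, 5 * 4 * 5, 3 * 3 * 3]   # RPNs: 12, 100, 27
weights = fmea_weights(rpns)
print(weights)  # model with the highest risk gets the smallest weight
```

Recomputing these weights per attack scenario is what makes the weighting dynamic: a submodel that degrades under a given attack is automatically discounted in the soft vote.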