Quantum Computing Supported Adversarial Attack-Resilient Autonomous Vehicle Perception Module for Traffic Sign Classification

📅 2025-04-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the vulnerability of autonomous vehicle perception modules to adversarial attacks (e.g., PGD, FGSM, GA) in traffic sign classification, posing critical safety risks. To enhance robustness, we propose a Hybrid Classical-Quantum Deep Learning (HCQ-DL) perception architecture that integrates AlexNet or VGG-16 feature extractors with trainable quantum circuits comprising ~100 parameters. Leveraging transfer learning and a hybrid classical-quantum training paradigm, the framework enables end-to-end robust classification. Experiments demonstrate >95% accuracy under clean conditions; robustness degrades gracefully to >91% under FGSM and GA attacks, and remains at 85% under stronger PGD attacks—substantially outperforming classical baselines (<21%). This study constitutes the first empirical validation of medium-scale trainable quantum circuits for improving adversarial robustness in real-world traffic scenarios, establishing a novel pathway toward quantum-enhanced autonomous driving perception.
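The summary describes the hybrid architecture as a classical feature extractor followed by a small trainable quantum circuit head. The paper's own circuit design is not reproduced here; the following is a minimal numpy state-vector sketch of a generic variational head of this kind (angle-encode features with RY rotations, apply trainable RY layers with CNOT entanglement, read out a Pauli-Z expectation as the classification score). The function names (`quantum_head`, `apply_1q`, `apply_cnot`) and the specific gate layout are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def apply_1q(state, gate, qubit, n_qubits):
    """Apply a single-qubit gate to `qubit` of an n-qubit state vector."""
    op = np.array([[1.0]])
    for q in range(n_qubits):
        op = np.kron(op, gate if q == qubit else np.eye(2))
    return op @ state

def apply_cnot(state, control, target, n_qubits):
    """Apply CNOT (qubit 0 = most significant bit) by permuting amplitudes."""
    new = state.copy()
    for i in range(2 ** n_qubits):
        if (i >> (n_qubits - 1 - control)) & 1:
            j = i ^ (1 << (n_qubits - 1 - target))
            new[i] = state[j]
    return new

def quantum_head(features, weights):
    """Hypothetical variational circuit head: angle-encode one feature per
    qubit, apply trainable RY layers with a CNOT chain, and return the
    Pauli-Z expectation on qubit 0 (a score in [-1, 1])."""
    n = len(features)
    state = np.zeros(2 ** n)
    state[0] = 1.0
    for q, f in enumerate(features):            # data encoding
        state = apply_1q(state, ry(f), q, n)
    for layer in weights:                       # trainable layers
        for q, w in enumerate(layer):
            state = apply_1q(state, ry(w), q, n)
        for q in range(n - 1):                  # entangling CNOT chain
            state = apply_cnot(state, q, q + 1, n)
    probs = np.abs(state) ** 2
    z0 = np.array([1.0 if not (i >> (n - 1)) & 1 else -1.0
                   for i in range(2 ** n)])
    return float(probs @ z0)
```

In a full pipeline, `features` would be a low-dimensional projection of the AlexNet/VGG-16 embedding and `weights` (one row per layer) would be trained jointly with the classical layers; a circuit with a few qubits and a few such layers lands in the "~100 parameters" regime the summary mentions.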

📝 Abstract
Deep learning (DL)-based image classification models are essential for autonomous vehicle (AV) perception modules, since incorrect categorization can have severe repercussions. Adversarial attacks are widely studied cyberattacks that can cause DL models to produce incorrect outputs, such as traffic signs misclassified by the perception module of an autonomous vehicle. In this study, we create and compare hybrid classical-quantum deep learning (HCQ-DL) models with classical deep learning (C-DL) models to demonstrate robustness against adversarial attacks for perception modules. We used transfer learning models, AlexNet and VGG-16, as feature extractors before feeding their features into the quantum system. We tested over 1000 quantum circuits in our HCQ-DL models against projected gradient descent (PGD), fast gradient sign attack (FGSA), and gradient attack (GA), three well-known untargeted adversarial approaches. We evaluated the performance of all models under adversarial attacks and in no-attack scenarios. Our HCQ-DL models maintain accuracy above 95% in the no-attack scenario and above 91% under GA and FGSA attacks, which is higher than the C-DL models. During the PGD attack, our AlexNet-based HCQ-DL model maintained an accuracy of 85%, compared to C-DL models that achieved accuracies below 21%. Our results highlight that the HCQ-DL models provide improved accuracy for traffic sign classification under adversarial settings compared to their classical counterparts.
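The gradient-based attacks evaluated in the abstract follow standard constructions: a single signed-gradient step (FGSA/FGSM) and its iterated, projected variant (PGD). As a hedged illustration only, here is a minimal numpy sketch of both attacks against a simple logistic-regression classifier; the paper attacks full DL models, and the function names and parameter values below are assumptions for the sketch.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_wrt_input(x, y, w, b):
    """Gradient of the binary cross-entropy loss w.r.t. the input x
    for a logistic-regression classifier p = sigmoid(w.x + b)."""
    p = sigmoid(w @ x + b)
    return (p - y) * w

def fgsm(x, y, w, b, eps):
    """Fast gradient sign attack: one signed gradient step of size eps."""
    return x + eps * np.sign(grad_wrt_input(x, y, w, b))

def pgd(x, y, w, b, eps, alpha=0.01, steps=20):
    """Projected gradient descent: iterated signed-gradient steps, each
    projected back into the L-infinity ball of radius eps around x."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_wrt_input(x_adv, y, w, b))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # stay within the eps ball
    return x_adv
```

Both attacks perturb the input to increase the classifier's loss on the true label while keeping the perturbation imperceptibly small (bounded by `eps`); PGD's iteration is what makes it the stronger attack against which the abstract reports the largest accuracy gap.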
Problem

Research questions and friction points this paper is trying to address.

Enhancing AV perception resilience against adversarial attacks
Comparing hybrid quantum-classical vs classical DL robustness
Improving traffic sign classification accuracy under cyberattacks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid classical-quantum deep learning models
Transfer learning for feature extraction
Robustness against adversarial attacks