Cross-Model Transferability of Adversarial Patches in Real-time Segmentation for Autonomous Driving

📅 2025-02-22
🤖 AI Summary
This work investigates the cross-architectural transferability of adversarial patches against real-time semantic segmentation models in autonomous driving, focusing on CNN-based (e.g., PIDNet) and ViT-based (e.g., SegFormer) architectures. We propose the first lightweight, differentiable Expectation Over Transformation (EOT)-based patch attack framework tailored for semantic segmentation, employing a simplified loss function to optimize patch generation. Extensive experiments on Cityscapes demonstrate that adversarial patches exhibit strong intra-architectural transferability but near-zero transferability across CNN↔ViT boundaries. Patch attacks induce localized misclassifications in CNNs, whereas in ViTs they often trigger global semantic errors. Notably, classes such as “sky” show markedly higher robustness. Our findings reveal that architectural disparities—particularly in inductive bias and receptive field characteristics—fundamentally constrain adversarial transferability in segmentation models. The code is publicly available.

📝 Abstract
Adversarial attacks pose a significant threat to deep learning models, particularly in safety-critical applications like healthcare and autonomous driving. Recently, patch-based attacks have demonstrated effectiveness in real-time inference scenarios owing to their 'drag and drop' nature. Following this idea for Semantic Segmentation (SS), we propose a novel Expectation Over Transformation (EOT)-based adversarial patch attack that is more realistic for autonomous vehicles. To train this attack effectively, we also propose a 'simplified' loss function that is easy to analyze and implement. Using this attack as our basis, we investigate whether adversarial patches, once optimized on a specific SS model, can fool other models or architectures. We conduct a comprehensive cross-model transferability analysis of adversarial patches trained on SOTA Convolutional Neural Network (CNN) models such as PIDNet-S, PIDNet-M and PIDNet-L, among others. Additionally, we include the SegFormer model to study transferability to Vision Transformers (ViTs). All of our analysis is conducted on the widely used Cityscapes dataset. Our study reveals key insights into how model architectures (CNN vs. CNN and CNN vs. Transformer-based) influence attack susceptibility. In particular, we conclude that although the transferability (effectiveness) of attacks on unseen images of any dimension is high, attacks trained against one particular model are minimally effective on other models. This held for both ViT- and CNN-based models. Additionally, our results indicate that for CNN-based models the repercussions of patch attacks are local, unlike for ViTs. Per-class analysis reveals that simple classes like 'sky' suffer less misclassification than others. The code for the project is available at: https://github.com/p-shekhar/adversarial-patch-transferability
Problem

Research questions and friction points this paper is trying to address.

Study adversarial patch transferability across segmentation models.
Analyze attack effectiveness between CNN and Vision Transformer models.
Investigate local versus global effects of patch attacks.
Innovation

Methods, ideas, or system contributions that make the work stand out.

EOT-based adversarial patch attack
Simplified loss function implementation
Cross-model transferability analysis
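The EOT idea behind the first bullet can be sketched briefly: average the attack gradient over randomly sampled patch placements (a simple family of transformations), so the optimized patch degrades segmentation wherever it lands. The per-pixel linear "model", image sizes, learning rate, and loss below are illustrative stand-ins chosen so the gradient is exact in closed form; this is a minimal NumPy sketch of the EOT loop under those assumptions, not the authors' implementation (which attacks PIDNet/SegFormer with their simplified loss).

```python
import numpy as np

rng = np.random.default_rng(0)

H, W, C, K = 32, 32, 3, 4            # toy image size, channels, classes (illustrative)
P = 8                                # square patch side length

image = rng.uniform(0.0, 1.0, size=(H, W, C))
labels = rng.integers(0, K, size=(H, W))           # toy ground-truth segmentation
Wm = rng.normal(0.0, 0.5, size=(C, K))             # toy per-pixel linear "segmentation model"

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def paste(img, patch, r, c):
    """Apply one sampled transformation: place the patch at (r, c)."""
    out = img.copy()
    out[r:r + P, c:c + P] = patch
    return out

def ce_loss(img):
    """Mean cross-entropy of the true per-pixel class; the attack maximizes this."""
    p = softmax(img @ Wm)                                        # (H, W, K)
    true_p = np.take_along_axis(p, labels[..., None], -1)[..., 0]
    return -np.log(true_p + 1e-12).mean()

def grad_wrt_patch(patch, r, c):
    """Exact gradient of ce_loss w.r.t. the patch pixels (linear model only)."""
    img = paste(image, patch, r, c)
    p = softmax(img @ Wm)
    onehot = np.eye(K)[labels]
    g_pix = (p - onehot) @ Wm.T / (H * W)                        # (H, W, C)
    return g_pix[r:r + P, c:c + P]

# EOT loop: average the gradient over sampled placements, then ascend,
# keeping pixels in the valid [0, 1] box.
patch = np.full((P, P, C), 0.5)
lr, steps, samples = 20.0, 40, 8
for _ in range(steps):
    g = np.zeros_like(patch)
    for _ in range(samples):
        r = int(rng.integers(0, H - P + 1))
        c = int(rng.integers(0, W - P + 1))
        g += grad_wrt_patch(patch, r, c)
    patch = np.clip(patch + lr * g / samples, 0.0, 1.0)

# Evaluate on held-out placements: the optimized patch should raise the loss
# more than a neutral gray patch does, regardless of where it is pasted.
eval_pos = [(int(r), int(c)) for r, c in
            zip(rng.integers(0, H - P + 1, 32), rng.integers(0, W - P + 1, 32))]

def eot_eval(pch):
    return float(np.mean([ce_loss(paste(image, pch, r, c)) for r, c in eval_pos]))

clean_loss = eot_eval(np.full((P, P, C), 0.5))
adv_loss = eot_eval(patch)
```

In the paper's setting the transformation family and loss are richer, and gradients come from backpropagation through the segmentation network rather than a closed form, but the structure (sample transformation, accumulate gradient, ascend, project) is the same.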