Benchmarking the Spatial Robustness of DNNs via Natural and Adversarial Localized Corruptions

📅 2025-04-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of evaluating the spatial robustness of dense vision models (e.g., semantic segmentation) under localized natural and adversarial corruptions, which arise in safety-critical applications such as autonomous driving. The authors formally define and quantify spatial robustness through specialized metrics and propose a region-aware multi-attack adversarial analysis framework, motivated by the finding that a single localized perturbation cannot characterize worst-case robustness. The analysis reveals complementary vulnerabilities between CNNs and transformers: transformer-based models are more resilient to localized natural corruptions but more susceptible to localized adversarial attacks, while CNN-based models show the opposite pattern. Combining localized corruption modeling, region-based multi-attack analysis, and the proposed spatial robustness metrics, the paper benchmarks 15 segmentation models and shows that ensemble models pairing the two architecture families achieve broader threat coverage and improved reliability against both natural and adversarial spatial threats.
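
To make the notion of a spatial robustness metric concrete, here is a minimal sketch of one plausible form: aggregate per-location performance drops (e.g., mIoU drops measured when a corruption is placed at each location of a grid) into average-case and worst-case scores. This aggregation is an illustrative assumption, not the paper's actual metric definition.

```python
# Hypothetical aggregation of per-location mIoU drops into scalar scores.
# The exact metric in the paper may differ; this only illustrates the idea.
def spatial_robustness_scores(clean_miou: float, drops: dict) -> dict:
    """Summarize a sweep of localized corruptions.

    clean_miou: mIoU on the uncorrupted image.
    drops: mapping from corruption location (y, x) to the mIoU drop it causes.
    """
    denom = max(clean_miou, 1e-8)  # guard against a degenerate clean score
    mean_drop = sum(drops.values()) / len(drops)
    worst_drop = max(drops.values())
    return {
        "average_case": 1.0 - mean_drop / denom,  # 1.0 = no degradation on average
        "worst_case": 1.0 - worst_drop / denom,   # 1.0 = no degradation anywhere
    }
```

The gap between the average-case and worst-case scores indicates how strongly a model's robustness depends on where a corruption lands.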

📝 Abstract
The robustness of DNNs is a crucial factor in safety-critical applications, particularly in complex and dynamic environments where localized corruptions can arise. While previous studies have evaluated the robustness of semantic segmentation (SS) models under whole-image natural or adversarial corruptions, a comprehensive investigation into the spatial robustness of dense vision models under localized corruptions has remained underexplored. This paper fills this gap by introducing specialized metrics for benchmarking the spatial robustness of segmentation models, alongside an evaluation framework to assess the impact of localized corruptions. Furthermore, we uncover the inherent complexity of characterizing worst-case robustness using a single localized adversarial perturbation. To address this, we propose region-aware multi-attack adversarial analysis, a method that enables a deeper understanding of model robustness against adversarial perturbations applied to specific regions. The proposed metrics and analysis were evaluated on 15 segmentation models in driving scenarios, uncovering key insights into the effects of localized corruption in both natural and adversarial forms. The results reveal that models respond to these two types of threats differently; for instance, transformer-based segmentation models demonstrate notable robustness to localized natural corruptions but are highly vulnerable to adversarial ones, and vice versa for CNN-based models. Consequently, we also address the challenge of balancing robustness to both natural and adversarial localized corruptions by means of ensemble models, thereby achieving broader threat coverage and improved reliability for dense vision tasks.
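
As a concrete illustration of the localized-corruption evaluation the abstract describes, the sketch below slides a square patch of Gaussian noise over a grid of image locations and records the per-location mIoU drop. The patch size, stride, noise level, and model interface are illustrative assumptions rather than the paper's protocol.

```python
# Hypothetical localized natural-corruption sweep (illustrative, not the paper's code).
# Assumes `model` maps a (1, 3, H, W) float tensor in [0, 1] to per-pixel class logits.
import torch

def miou(pred: torch.Tensor, target: torch.Tensor, num_classes: int) -> float:
    """Mean intersection-over-union between two (H, W) label maps."""
    ious = []
    for c in range(num_classes):
        inter = ((pred == c) & (target == c)).sum().item()
        union = ((pred == c) | (target == c)).sum().item()
        if union > 0:
            ious.append(inter / union)
    return sum(ious) / max(len(ious), 1)

@torch.no_grad()
def localized_corruption_sweep(model, image, target, num_classes,
                               patch=64, stride=64, sigma=0.2):
    """Place a Gaussian-noise patch at each grid location; return per-location mIoU drops."""
    clean_pred = model(image.unsqueeze(0)).argmax(dim=1).squeeze(0)
    clean_miou = miou(clean_pred, target, num_classes)
    _, h, w = image.shape
    drops = {}
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            corrupted = image.clone()
            noise = sigma * torch.randn(3, patch, patch)
            region = corrupted[:, y:y + patch, x:x + patch]
            corrupted[:, y:y + patch, x:x + patch] = (region + noise).clamp(0, 1)
            pred = model(corrupted.unsqueeze(0)).argmax(dim=1).squeeze(0)
            drops[(y, x)] = clean_miou - miou(pred, target, num_classes)
    return clean_miou, drops
```

The resulting (clean_miou, drops) pair can then be fed to an aggregation such as the one sketched earlier to obtain scalar robustness scores.
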
Problem

Research questions and friction points this paper is trying to address.

Evaluates the spatial robustness of DNNs under localized natural and adversarial corruptions
Proposes metrics and an evaluation framework to assess segmentation models' vulnerability to localized threats (a masked-attack sketch follows this list)
Addresses how to balance robustness to natural and adversarial corruptions via ensemble models
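
The sketch below illustrates what a localized adversarial threat looks like in code: a PGD-style attack whose perturbation is confined to a binary region mask, so only pixels inside the region are modified. This is a single masked attack for illustration; the paper's region-aware multi-attack analysis evaluates multiple attacks per region. The step size, budget, and interfaces are assumptions.

```python
# Hypothetical region-masked PGD sketch (one attack, for illustration only).
import torch
import torch.nn.functional as F

def region_masked_pgd(model, image, target, mask,
                      eps=8 / 255, alpha=2 / 255, steps=10):
    """Maximize segmentation cross-entropy with an L_inf-bounded perturbation
    that is zeroed outside `mask`.

    image: (3, H, W) in [0, 1]; target: (H, W) long labels;
    mask: (1, H, W) binary tensor selecting the attacked region.
    """
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        logits = model((image + delta * mask).unsqueeze(0))
        loss = F.cross_entropy(logits, target.unsqueeze(0))
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()   # gradient-ascent step
            delta.clamp_(-eps, eps)              # enforce the L_inf budget
            delta.mul_(mask)                     # keep the perturbation localized
            delta.copy_((image + delta).clamp(0, 1) - image)  # keep pixels valid
        delta.grad.zero_()
    return (image + delta * mask).clamp(0, 1).detach()
```

Running such an attack over a grid of candidate regions and keeping, per region, the strongest of several attack variants yields the kind of region-aware worst-case map that, per the paper, a single perturbation cannot provide.
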
Innovation

Methods, ideas, or system contributions that make the work stand out.

Specialized metrics for spatial robustness benchmarking
Region-aware multi-attack adversarial analysis method
Ensemble models balancing natural and adversarial robustness (a minimal sketch follows this list)
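
A minimal sketch of the ensembling idea referenced above, under the assumption that the ensemble simply averages the per-pixel class probabilities of a CNN-based and a transformer-based segmenter; the paper's actual combination scheme may differ.

```python
# Hypothetical probability-averaging ensemble of two segmenters (illustrative only).
import torch

@torch.no_grad()
def ensemble_segment(cnn_model, transformer_model, image, cnn_weight=0.5):
    """Weighted average of per-pixel softmax outputs; returns an (H, W) label map."""
    x = image.unsqueeze(0)
    probs = (cnn_weight * cnn_model(x).softmax(dim=1)
             + (1.0 - cnn_weight) * transformer_model(x).softmax(dim=1))
    return probs.argmax(dim=1).squeeze(0)
```

The intent, per the abstract, is broader threat coverage: where one architecture family is degraded by a localized corruption, the other's correct predictions can dominate the averaged probabilities.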