PatchGuard: Adversarially Robust Anomaly Detection and Localization through Vision Transformers and Pseudo Anomalies

📅 2025-06-10
🤖 AI Summary
Existing anomaly detection (AD) and localization (AL) methods rely exclusively on normal samples for training, resulting in poor adversarial robustness and insufficient reliability for safety-critical medical and industrial applications. To address this, the authors propose PatchGuard, a ViT-based framework that introduces the first foreground-aware pseudo-anomaly generation paradigm, supervised by localization masks, and theoretically establishes an intrinsic connection between ViT's self-attention mechanism and adversarial robustness. The authors further design a novel adversarial loss function that jointly optimizes localization accuracy and robustness. Evaluated on benchmark industrial and medical datasets, PatchGuard improves AD and AL performance under adversarial attacks by 53.2% and 68.5%, respectively, while maintaining state-of-the-art accuracy in clean (non-adversarial) settings.

๐Ÿ“ Abstract
Anomaly Detection (AD) and Anomaly Localization (AL) are crucial in fields that demand high reliability, such as medical imaging and industrial monitoring. However, current AD and AL approaches are often susceptible to adversarial attacks due to limitations in training data, which typically include only normal, unlabeled samples. This study introduces PatchGuard, an adversarially robust AD and AL method that incorporates pseudo anomalies with localization masks within a Vision Transformer (ViT)-based architecture to address these vulnerabilities. We begin by examining the essential properties of pseudo anomalies, and then provide theoretical insights into the attention mechanisms required to enhance the adversarial robustness of AD and AL systems. We then present our approach, which leverages Foreground-Aware Pseudo-Anomalies to overcome the deficiencies of previous anomaly-aware methods. Our method incorporates these crafted pseudo-anomaly samples into a ViT-based framework, with adversarial training guided by a novel loss function designed to improve model robustness, as supported by our theoretical analysis. Experimental results on well-established industrial and medical datasets demonstrate that PatchGuard significantly outperforms previous methods in adversarial settings, achieving performance gains of 53.2% in AD and 68.5% in AL, while also maintaining competitive accuracy in non-adversarial settings. The code repository is available at https://github.com/rohban-lab/PatchGuard.
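The abstract's core data idea, pseudo anomalies paired with localization masks and restricted to the object foreground, can be illustrated with a minimal sketch. This is not the authors' implementation; the function name, patch shape, and noise fill are all assumptions chosen for brevity. The key point it demonstrates is that each synthetic defect comes with a pixel-level mask that can supervise localization.

```python
# Illustrative sketch (not the paper's code): create a foreground-aware
# pseudo-anomaly by pasting a noise patch at a random location inside the
# object's foreground mask, returning both the augmented image and the
# pixel-level anomaly mask used as localization supervision.
import numpy as np

def make_pseudo_anomaly(image, fg_mask, patch_size=8, rng=None):
    """image:   (H, W, C) float array, a normal training sample.
    fg_mask:    (H, W) boolean array marking the object foreground.
    Returns (augmented image, (H, W) boolean anomaly mask).
    All names here are hypothetical illustrations."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = fg_mask.shape
    ys, xs = np.nonzero(fg_mask)              # candidate foreground pixels
    i = rng.integers(len(ys))                 # anchor the patch on one of them
    y = int(np.clip(ys[i], 0, h - patch_size))
    x = int(np.clip(xs[i], 0, w - patch_size))

    out = image.copy()
    noise = rng.uniform(0.0, 1.0, size=(patch_size, patch_size, image.shape[2]))
    out[y:y + patch_size, x:x + patch_size] = noise

    anomaly_mask = np.zeros((h, w), dtype=bool)
    anomaly_mask[y:y + patch_size, x:x + patch_size] = True
    return out, anomaly_mask
```

Restricting the paste location to the foreground mask avoids the failure mode the paper attributes to earlier anomaly-aware methods, where synthetic defects placed on the background teach the model nothing about the object itself.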
Problem

Research questions and friction points this paper is trying to address.

Enhances adversarial robustness in anomaly detection and localization
Addresses vulnerabilities in training data with pseudo anomalies
Improves performance in medical and industrial adversarial settings
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Vision Transformers for anomaly detection
Incorporates pseudo anomalies with localization masks
Adversarial training with novel loss function
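The joint objective described above can be sketched as a pixel-wise localization term against the pseudo-anomaly mask plus an image-level detection term. This is a hedged illustration only: the paper's actual loss, weighting `lam`, and the max-pooling image-level score are assumptions here, not the published formulation.

```python
# Hedged sketch of a joint detection + localization objective of the kind
# summarized above. NOT the paper's loss: the BCE form, the max-pooled
# image-level score, and the weight `lam` are illustrative assumptions.
import numpy as np

def joint_loss(pixel_scores, anomaly_mask, lam=0.5, eps=1e-7):
    """pixel_scores: (H, W) predicted anomaly probabilities in (0, 1).
    anomaly_mask:    (H, W) binary pseudo-anomaly mask (localization target).
    Returns pixel-wise BCE + lam * image-level BCE."""
    p = np.clip(pixel_scores, eps, 1 - eps)
    m = anomaly_mask.astype(float)
    # Localization term: per-pixel binary cross-entropy against the mask.
    loc = -(m * np.log(p) + (1 - m) * np.log(1 - p)).mean()
    # Detection term: image-level score via max pooling over pixels;
    # the image label is "anomalous" iff any pixel in the mask is set.
    s = np.clip(p.max(), eps, 1 - eps)
    y = float(m.any())
    det = -(y * np.log(s) + (1 - y) * np.log(1 - s))
    return loc + lam * det
```

In adversarial training, a loss of this shape would be evaluated on perturbed inputs (e.g. a PGD attack maximizing it) so that both the detection and localization heads are hardened jointly rather than separately.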