🤖 AI Summary
This work addresses the vulnerability of edge AI systems, such as those in autonomous driving and surveillance, to adversarial patch attacks, where small, localized perturbations can cause severe model misclassification. To counter this, the authors propose PatchBlock, a lightweight, model- and attack-agnostic preprocessing defense framework. Deployed at the sensor end, PatchBlock employs a three-stage pipeline: image tiling, an enhanced isolation-forest anomaly detector based on object-aware segmentation, and dimensionality reduction to identify and suppress adversarial patches. By leveraging CPU-based parallel computation, it avoids additional GPU overhead while maintaining efficiency. Experimental results show that PatchBlock integrates seamlessly into existing edge AI pipelines, recovering up to 77% of the accuracy lost under attack across diverse models, datasets, and hardware platforms, with minimal computational cost, low energy consumption, and negligible performance degradation on clean inputs.
📝 Abstract
Adversarial attacks pose a significant challenge to the reliable deployment of machine learning models in EdgeAI applications, such as autonomous driving and surveillance, which rely on resource-constrained devices for real-time inference. Among these, patch-based adversarial attacks, where small malicious patches (e.g., stickers) are applied to objects, can deceive neural networks into making incorrect predictions with potentially severe consequences. In this paper, we present PatchBlock, a lightweight framework designed to detect and neutralize adversarial patches in images. Leveraging outlier detection and dimensionality reduction, PatchBlock identifies regions affected by adversarial noise and suppresses their impact. It operates as a pre-processing module at the sensor level, running efficiently on CPUs in parallel with GPU inference, thus preserving system throughput while avoiding additional GPU overhead. The framework follows a three-stage pipeline: splitting the input into chunks (Chunking), detecting anomalous regions via a redesigned isolation forest with targeted cuts for faster convergence (Separating), and applying dimensionality reduction to the identified outliers (Mitigating). PatchBlock is both model- and patch-agnostic, can be retrofitted to existing pipelines, and integrates seamlessly between sensor inputs and downstream models. Evaluations across multiple neural architectures, benchmark datasets, attack types, and diverse edge devices demonstrate that PatchBlock consistently improves robustness, recovering up to 77% of model accuracy under strong patch attacks such as the Google Adversarial Patch, while maintaining high portability and minimal clean-accuracy loss. Additionally, PatchBlock outperforms state-of-the-art defenses in efficiency, both in computation time and in energy consumption per sample, making it well suited for EdgeAI applications.
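To make the three-stage pipeline concrete, here is a rough, illustrative sketch of Chunking, Separating, and Mitigating. It is not the paper's implementation: it substitutes scikit-learn's stock `IsolationForest` for the redesigned isolation forest with targeted cuts, uses plain PCA for the dimensionality-reduction step, and all function names and parameters (`defend`, `chunk_image`, tile size, `contamination`) are hypothetical choices for this example.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.decomposition import PCA


def chunk_image(image, size):
    """Chunking: split an HxWxC image into non-overlapping size x size tiles."""
    H, W, _ = image.shape
    tiles, coords = [], []
    for y in range(0, H - size + 1, size):
        for x in range(0, W - size + 1, size):
            tiles.append(image[y:y + size, x:x + size])
            coords.append((y, x))
    return np.stack(tiles), coords


def defend(image, size=8, contamination=0.1, n_components=2, seed=0):
    """Flag anomalous tiles and suppress them; returns (defended image, flagged tile indices)."""
    tiles, coords = chunk_image(image, size)
    feats = tiles.reshape(len(tiles), -1).astype(np.float64)

    # Separating: a stock isolation forest stands in for the paper's redesigned variant.
    labels = IsolationForest(contamination=contamination, random_state=seed).fit_predict(feats)
    flagged = np.where(labels == -1)[0]
    inliers = np.where(labels == 1)[0]

    out = image.astype(np.float64).copy()
    if len(flagged) and len(inliers) > n_components:
        # Mitigating: project flagged tiles onto a low-dimensional subspace fit on
        # inlier tiles, discarding the high-frequency detail a patch relies on.
        pca = PCA(n_components=n_components).fit(feats[inliers])
        recon = pca.inverse_transform(pca.transform(feats[flagged]))
        recon = recon.reshape(-1, size, size, image.shape[2])
        for idx, tile in zip(flagged, recon):
            y, x = coords[idx]
            out[y:y + size, x:x + size] = np.clip(tile, 0, 255)
    return out, flagged
```

On a synthetic image with a small high-variance "patch" on a smooth background, the noisy tile is isolated quickly and its pixels are pulled back toward the background statistics, while unflagged tiles pass through untouched; both stages run on CPU only, consistent with the sensor-side deployment described above.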