🤖 AI Summary
This study systematically investigates the vulnerability of Vision Transformers (ViTs) to adversarial watermarking attacks on dermatological medical images, and compares the transferability of these attacks and the robustness of defenses against convolutional neural networks (CNNs). We employ Projected Gradient Descent (PGD) optimization to generate adversarial watermarks and conduct cross-architecture attacks between ViTs and CNNs. Our experiments reveal, for the first time in medical imaging, that ViTs are significantly more susceptible: top-1 accuracy drops sharply to 27.6% under attack, whereas CNNs retain 68.3%. We further demonstrate that architecture-specific adversarial training substantially enhances ViT robustness, raising accuracy under attack to 90.0% while preserving performance on clean samples. These findings provide novel empirical evidence and methodological support for the security evaluation and robustness enhancement of AI models in clinical dermatology applications.
📝 Abstract
Deep learning models have shown remarkable success in dermatological image analysis, offering potential for automated skin disease diagnosis. Convolutional neural network (CNN) based architectures previously achieved immense popularity and success in computer vision (CV) tasks such as skin image recognition, image generation, and video analysis. With the emergence of transformer-based models, however, many CV tasks are now carried out using these architectures. Vision Transformers (ViTs) are one such family of transformer-based models: they use self-attention mechanisms to achieve state-of-the-art performance across a variety of vision tasks. However, their reliance on global attention makes them susceptible to adversarial perturbations. This paper investigates the susceptibility of ViTs on medical images to adversarial watermarking, a method that adds imperceptible perturbations in order to fool models. By generating adversarial watermarks through Projected Gradient Descent (PGD), we examine the transferability of such attacks to CNNs and analyze the performance of a defense mechanism, adversarial training. Results indicate that while performance on clean images is not compromised, ViTs become much more vulnerable to adversarial attacks, with accuracy dropping to as low as 27.6%. Adversarial training, however, raises accuracy under attack to 90.0%.
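The PGD procedure used to generate adversarial watermarks can be illustrated on a toy differentiable classifier. The sketch below uses a linear softmax model in NumPy as a stand-in for the ViT; all function names, hyperparameters (`eps`, `alpha`, `steps`), and the analytic gradient are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax."""
    e = np.exp(z - z.max())
    return e / e.sum()

def pgd_attack(x, y, W, b, eps=0.1, alpha=0.02, steps=10):
    """L-infinity PGD sketch on a toy linear softmax classifier.

    x    : clean input vector
    y    : true class index
    W, b : classifier weights (stand-in for the ViT)
    eps  : perturbation budget (keeps the watermark imperceptible)
    alpha: per-iteration step size
    """
    x_adv = x.copy()
    for _ in range(steps):
        p = softmax(W @ x_adv + b)
        # Gradient of cross-entropy loss w.r.t. the input:
        # W^T (p - onehot(y)) for a linear model.
        g = W.T @ (p - np.eye(len(p))[y])
        # Ascend the loss with a signed gradient step.
        x_adv = x_adv + alpha * np.sign(g)
        # Project back into the eps-ball around the clean input,
        # so the perturbation stays within the watermark budget.
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv
```

The signed-gradient step and the projection (`np.clip`) are the two defining ingredients of PGD: the former maximizes the classifier's loss, while the latter enforces imperceptibility of the resulting watermark.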