Exploring Adversarial Watermarking in Transformer-Based Models: Transferability and Robustness Against Defense Mechanism for Medical Images

📅 2025-06-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study systematically investigates the vulnerability of Vision Transformers (ViTs) to adversarial watermarking attacks on dermatological medical images, and compares both the robustness of ViTs and the transferability of these attacks with convolutional neural networks (CNNs). We employ Projected Gradient Descent (PGD) optimization to generate adversarial watermarks and conduct cross-architecture attacks between ViTs and CNNs. Our experiments reveal, for the first time in medical imaging, that ViTs are significantly more susceptible: top-1 accuracy drops sharply to 27.6% under attack, whereas CNNs retain 68.3%. We further demonstrate that architecture-specific adversarial training substantially enhances ViT robustness, raising accuracy under attack to 90.0%, while preserving performance on clean samples. These findings provide novel empirical evidence and methodological support for the security evaluation and robustness enhancement of AI models in clinical dermatology applications.

📝 Abstract
Deep learning models have shown remarkable success in dermatological image analysis, offering potential for automated skin disease diagnosis. Convolutional neural network (CNN) based architectures have long enjoyed immense popularity and success in computer vision (CV) tasks such as skin image recognition, image generation, and video analysis. With the emergence of transformer-based models, however, many CV tasks are now carried out using these architectures. Vision Transformers (ViTs) are one such family of transformer-based models that have shown success in computer vision, using self-attention mechanisms to achieve state-of-the-art performance across various tasks. However, their reliance on global attention mechanisms makes them susceptible to adversarial perturbations. This paper investigates the susceptibility of ViTs on medical images to adversarial watermarking, a method that adds nominally imperceptible perturbations in order to fool models. By generating adversarial watermarks through Projected Gradient Descent (PGD), we examine the transferability of such attacks to CNNs and analyze the performance of a defense mechanism, adversarial training. Results indicate that while performance on clean images is not compromised, ViTs become much more vulnerable to adversarial attacks, with accuracy dropping to as low as 27.6%. Nevertheless, adversarial training raises it back up to 90.0%.
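The PGD procedure described in the abstract can be sketched in a few lines. The paper applies it to ViTs and CNNs on dermatological images; the minimal NumPy illustration below instead uses a linear softmax classifier (so the input gradient has a closed form), and all function names, step sizes, and the perturbation budget are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def pgd_attack(W, b, x, y, eps=0.3, alpha=0.1, steps=10):
    """PGD in the L-infinity ball: repeatedly ascend the cross-entropy
    loss via signed input gradients, then project back so the
    perturbation never exceeds eps and pixels stay in [0, 1]."""
    x_adv = x.copy()
    for _ in range(steps):
        p = softmax(W @ x_adv + b)
        onehot = np.zeros_like(p)
        onehot[y] = 1.0
        grad = W.T @ (p - onehot)                  # dLoss/dx for softmax + CE
        x_adv = x_adv + alpha * np.sign(grad)      # signed gradient ascent step
        x_adv = x + np.clip(x_adv - x, -eps, eps)  # project into the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)           # keep a valid pixel range
    return x_adv
```

For a deep model the analytic gradient line is replaced by backpropagation through the network, but the sign step, the projection, and the pixel clipping are the same; the "watermark" is simply the accumulated perturbation `x_adv - x`.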
Problem

Research questions and friction points this paper is trying to address.

Investigating adversarial watermarking vulnerability in Vision Transformers for medical images
Assessing transferability of adversarial attacks from ViTs to CNNs
Evaluating robustness of adversarial training defense mechanism
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adversarial watermarking in Vision Transformers
Transferability analysis using Projected Gradient Descent
Adversarial training boosts robustness significantly
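The adversarial-training defense highlighted above can likewise be sketched: at each update, train on worst-case perturbed inputs rather than clean ones. This is a minimal NumPy illustration on a logistic model with a one-step (FGSM-style) inner maximization, which is closed-form for a linear classifier; the paper's actual defense is architecture-specific training of a ViT, and every name and hyperparameter here is an illustrative assumption.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adversarial_train(X, y, eps=0.1, lr=0.5, epochs=300):
    """Adversarial training sketch: inner step crafts worst-case inputs
    inside an L-infinity ball of radius eps; outer step fits the model
    (logistic regression) to those perturbed inputs."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        # Inner maximization: for a linear model the loss-ascent direction
        # per sample is sign(w) for y=0 and -sign(w) for y=1.
        X_adv = np.clip(X + eps * np.sign(w) * (1 - 2 * y)[:, None], 0.0, 1.0)
        # Outer minimization: one gradient step of cross-entropy on X_adv.
        p = sigmoid(X_adv @ w + b)
        g = p - y                          # dCE/dlogit per sample
        w -= lr * X_adv.T @ g / len(y)
        b -= lr * g.mean()
    return w, b
```

For deep networks the inner step is usually multi-step PGD rather than this closed-form perturbation, but the min-max structure, training on the perturbed batch, is the same idea the paper credits with restoring ViT accuracy under attack.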
🔎 Similar Papers
2024-03-22 · arXiv.org · Citations: 8
Rifat Sadik
University of Delaware
Edge Computing, Machine Learning, Deep Learning, Computer Vision, Natural Language Processing
Tanvir Rahman
Computer and Information Sciences, University of Delaware, Newark, 19711, Delaware, USA.
Arpan Bhattacharjee
Computer and Information Sciences, University of Delaware, Newark, 19711, Delaware, USA.
Bikash Chandra Halder
Computer Science and Engineering, Jahangirnagar University, Dhaka, 1342, Bangladesh.
Ismail Hossain
PhD Student (Research Assistant), Computer Science, University of Texas at El Paso, USA.
Security & Privacy, Machine Learning, AI for Social Good, Social Networks