Brain Tumor Classifiers Under Attack: Robustness of ResNet Variants Against Transferable FGSM and PGD Attacks

📅 2025-11-06
🏛️ International Conferences on Biological Information and Biomedical Engineering
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the insufficient robustness of deep learning models for brain tumor classification against transferable adversarial attacks—such as FGSM and PGD—in clinical MRI settings. It systematically evaluates three ResNet variants (BrainNet, BrainNeXt, and DilationNet) under diverse MRI preprocessing strategies in black-box attack scenarios. The work reveals, for the first time, a connection between model architecture—particularly cardinality—and adversarial transferability. It further identifies a trade-off between input resolution and adversarial vulnerability: low-resolution, non-augmented inputs substantially degrade robustness despite maintaining high clean-sample accuracy. Experimental results demonstrate that BrainNeXt exhibits the strongest robustness under black-box attacks, although its generated adversarial examples show weaker transferability compared to other architectures.
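The gradient-based attacks evaluated in the paper, FGSM and PGD, can be sketched in NumPy on a toy linear softmax classifier standing in for the ResNet variants. The weight matrix `W`, the budget `eps`, the step size `alpha`, and the step count are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def input_grad(W, x, y):
    # Gradient of cross-entropy loss w.r.t. the input x for logits W @ x.
    p = softmax(W @ x)
    p[y] -= 1.0                      # d(loss)/d(logits)
    return W.T @ p

def fgsm(W, x, y, eps):
    # FGSM: a single step of size eps along the sign of the input gradient,
    # clipped back to the valid pixel range [0, 1].
    return np.clip(x + eps * np.sign(input_grad(W, x, y)), 0.0, 1.0)

def pgd(W, x, y, eps, alpha, steps):
    # PGD: repeated FGSM-like steps of size alpha, projected after each
    # step into the L-infinity ball of radius eps around the clean input.
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(input_grad(W, x_adv, y))
        x_adv = np.clip(x_adv, x - eps, x + eps)   # project into eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)           # keep valid pixel range
    return x_adv
```

Increasing `steps` and `alpha` strengthens PGD, which matches the paper's observation that vulnerability grows under PGD with more iterations and larger $\alpha$ values.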

📝 Abstract
Adversarial robustness in deep learning models for brain tumor classification remains an underexplored yet critical challenge, particularly for clinical deployment scenarios involving MRI data. In this work, we investigate the susceptibility and resilience of several ResNet-based architectures, referred to as BrainNet, BrainNeXt, and DilationNet, against gradient-based adversarial attacks, namely FGSM and PGD. These models, based on ResNet, ResNeXt, and dilated ResNet variants respectively, are evaluated across three preprocessing configurations: (i) full-sized augmented, (ii) shrunk augmented, and (iii) shrunk non-augmented MRI datasets. Our experiments reveal that BrainNeXt models exhibit the highest robustness to black-box attacks, likely due to their increased cardinality, though they produce weaker transferable adversarial samples. In contrast, BrainNet and DilationNet models are more vulnerable to attacks from each other, especially under PGD with more iteration steps and larger $\alpha$ values. Notably, shrunk and non-augmented data significantly reduce model resilience, even when the untampered test accuracy remains high, highlighting a key trade-off between input resolution and adversarial vulnerability. These results underscore the importance of jointly evaluating classification performance and adversarial robustness for reliable real-world deployment in brain MRI analysis.
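The black-box transfer evaluation described above can be sketched as follows. Linear softmax models stand in for the surrogate and target networks; the model shapes, `eps`, and helper names are illustrative assumptions, not the paper's protocol:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm_linear(W, x, y, eps):
    # FGSM crafted on the surrogate model W: one step of size eps along
    # the sign of the cross-entropy gradient w.r.t. the input.
    p = softmax(W @ x)
    p[y] -= 1.0                      # d(loss)/d(logits)
    return np.clip(x + eps * np.sign(W.T @ p), 0.0, 1.0)

def transfer_rate(W_src, W_tgt, xs, ys, eps):
    # Black-box transferability: fraction of adversarial examples crafted
    # on the surrogate (W_src) that the unseen target (W_tgt) misclassifies.
    fooled = sum(
        int(np.argmax(W_tgt @ fgsm_linear(W_src, x, y, eps)) != y)
        for x, y in zip(xs, ys)
    )
    return fooled / len(xs)
```

In the paper's terms, a low transfer rate from a surrogate corresponds to that architecture producing weakly transferable adversarial samples (as reported for BrainNeXt), while a high rate against a target indicates weak black-box robustness.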
Problem

Research questions and friction points this paper is trying to address.

adversarial robustness
brain tumor classification
ResNet variants
FGSM
PGD
Innovation

Methods, ideas, or system contributions that make the work stand out.

adversarial robustness
ResNet variants
brain tumor classification
transferable attacks
MRI preprocessing
Ryan Deem
Department of Computer Science, Kennesaw State University, Marietta, GA, USA
Garrett Goodman
Computer Science and Software Engineering, Miami University, Oxford, OH, USA
Waqas Majeed
Siemens Medical Solutions USA Inc., MRI
Md Abdullah Al Hafiz Khan
Department of Computer Science, Kennesaw State University, Marietta, GA, USA
Michail S. Alexiou
Department of Computer Science, Kennesaw State University, Marietta, GA, USA