On the Adversarial Vulnerabilities of Transfer Learning in Remote Sensing

📅 2025-01-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Remote sensing models built on generic pre-trained backbones exhibit significant vulnerability to adversarial attacks during cross-domain transfer recognition, exposing critical security weaknesses. To address this, we propose a neuron-level adversarial manipulation method that requires neither target-domain data nor prior domain knowledge: it first identifies fragile neurons via sensitivity analysis, then employs gradient-guided perturbation optimization to precisely activate or suppress single or multiple neurons, thereby generating highly transferable adversarial examples. This work introduces the first "low-access, domain-knowledge-free" neuron-level attack paradigm and systematically uncovers previously unrecognized security threats posed by generic pre-trained models in remote sensing cross-domain tasks. Extensive evaluations on mainstream architectures (e.g., ResNet, ViT) and benchmark datasets (e.g., UCMerced, AID) demonstrate attack success rates exceeding 90%, with perturbation magnitudes reduced by 40% compared to conventional black-box attacks, a substantial improvement in both efficacy and stealth.
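The two-stage recipe in the summary (sensitivity analysis to locate fragile neurons, then gradient-guided perturbation under a budget to suppress them) can be sketched on a toy surrogate network. This is a minimal illustration of the general idea, not the paper's implementation: the one-layer ReLU model, the gradient-norm sensitivity score, and the sign-gradient update with an L-infinity budget are all assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy "pretrained" feature extractor:
# one hidden layer of 8 ReLU neurons over a 16-dim input.
W = rng.normal(size=(8, 16))
b = rng.normal(size=8)

def hidden(x):
    """Hidden-layer activations of the surrogate model."""
    return np.maximum(W @ x + b, 0.0)

def sensitivity(x):
    """Per-neuron fragility score: gradient norm of each
    activation w.r.t. the input (zero for inactive ReLUs)."""
    active = (W @ x + b > 0).astype(float)
    return active * np.linalg.norm(W, axis=1)

def suppress_neuron(x, idx, eps=0.5, steps=20, lr=0.1):
    """Gradient-guided perturbation that drives neuron `idx`
    toward zero activation within an L_inf budget `eps`."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        pre = W @ (x + delta) + b
        grad = (pre[idx] > 0) * W[idx]     # d a_idx / d x through the ReLU
        delta -= lr * np.sign(grad)        # step to reduce the activation
        delta = np.clip(delta, -eps, eps)  # enforce the perturbation budget
    return x + delta

x = rng.normal(size=16)
idx = int(np.argmax(sensitivity(x)))       # most fragile neuron
x_adv = suppress_neuron(x, idx)
print(hidden(x)[idx], hidden(x_adv)[idx])  # activation before vs. after
```

The same loop extends to multiple neurons by summing their gradients, and to "activation" rather than suppression by flipping the update sign; the paper's contribution is doing this on real pretrained backbones so the perturbation transfers to unseen downstream models.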

Technology Category

Application Category

📝 Abstract
The use of pretrained models from general computer vision tasks is widespread in remote sensing, significantly reducing training costs and improving performance. However, this practice also introduces vulnerabilities to downstream tasks, where publicly available pretrained models can be used as a proxy to compromise downstream models. This paper presents a novel Adversarial Neuron Manipulation method, which generates transferable perturbations by selectively manipulating single or multiple neurons in pretrained models. Unlike existing attacks, this method eliminates the need for domain-specific information, making it more broadly applicable and efficient. By targeting multiple fragile neurons, the perturbations achieve superior attack performance, revealing critical vulnerabilities in deep learning models. Experiments on diverse models and remote sensing datasets validate the effectiveness of the proposed method. This low-access adversarial neuron manipulation technique highlights a significant security risk in transfer learning models, emphasizing the urgent need for more robust defenses in their design for safety-critical remote sensing tasks.
Problem

Research questions and friction points this paper is trying to address.

Adversarial Attacks
Pre-trained Models
Remote Sensing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adversarial Neuron Manipulation
Robustness in Deep Learning
Remote Sensing Image Recognition
Tao Bai
School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798
Xingjian Tian
School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798
Yonghao Xu
Linköping University
Remote Sensing, Computer Vision, Machine Learning
Bihan Wen
Associate Professor, Nanyang Technological University
Machine Learning, Image Processing, Computational Imaging, Computer Vision, Trustworthy AI