Physical Backdoor Attack Against Deep Learning-Based Modulation Classification

📅 2026-03-26
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the vulnerability of deep learning–based modulation classification models to physical-layer backdoor attacks, which remain undetected by conventional digital-domain defenses. The study pioneers the extension of backdoor attacks from the digital domain into the physical radio frequency (RF) domain by exploiting nonlinear distortions introduced by power amplifiers to embed stealthy triggers in RF signals. During training, the attacker manipulates signal amplitudes and relabels samples, causing the model to misclassify specific adversarially crafted inputs during inference. By integrating power amplifier nonlinearity modeling, deep modulation classification, and adversarial example generation, the proposed method achieves high attack success rates under diverse noise conditions with only a small number of poisoned samples, effectively evading state-of-the-art defense mechanisms.

📝 Abstract
Deep Learning (DL) has become a key technology supporting radio frequency (RF) signal classification applications, such as modulation classification. However, DL models are vulnerable to adversarial machine learning threats, such as data manipulation attacks. We study a physical backdoor (Trojan) attack that targets a DL-based modulation classifier. In contrast to digital backdoor attacks, where digital triggers are injected into the training dataset, we use power amplifier (PA) non-linear distortions to create physical triggers before the dataset is formed. During training, the adversary manipulates the amplitudes of RF signals and changes their labels to a target modulation scheme, producing a backdoored model. At inference, the adversary aims to keep the backdoor inactive so that the backdoored model maintains high accuracy on test signals. However, if the adversary applies the same manipulation used during training to these test signals, the backdoor is activated and the model misclassifies them. We demonstrate that the proposed attack achieves high attack success rates with only a few manipulated RF signals across different noise levels. Furthermore, we test the resilience of the proposed attack against multiple defense techniques, and the results show that these techniques fail to mitigate the attack.
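The poisoning step described in the abstract — drive a small fraction of training signals harder into a PA's nonlinear region and relabel them to the target modulation class — can be sketched as below. The Rapp PA model, the poisoning rate, and the `scale` drive level are illustrative assumptions for this sketch, not the paper's exact setup.

```python
import numpy as np

def rapp_pa(x, g=1.0, a_sat=1.0, p=2.0):
    """Memoryless Rapp solid-state PA model (AM/AM compression only, no AM/PM).

    g: small-signal gain, a_sat: output saturation amplitude,
    p: smoothness factor controlling how sharply the amplifier saturates.
    """
    r = np.abs(x)
    gain = g / (1.0 + (g * r / a_sat) ** (2 * p)) ** (1.0 / (2 * p))
    return gain * x

def poison_dataset(signals, labels, target_label, rate=0.05, scale=1.5, seed=0):
    """Apply the physical trigger to a fraction of samples and relabel them.

    signals: complex IQ array of shape (num_samples, num_iq_samples)
    The amplitude manipulation (`scale`) pushes the signal into PA compression,
    embedding the nonlinear distortion that serves as the backdoor trigger.
    """
    rng = np.random.default_rng(seed)
    n_poison = int(rate * len(signals))
    idx = rng.choice(len(signals), size=n_poison, replace=False)
    signals, labels = signals.copy(), labels.copy()
    for i in idx:
        signals[i] = rapp_pa(scale * signals[i])  # physical trigger
        labels[i] = target_label                  # label flipping
    return signals, labels, idx
```

At inference time, the adversary would pass a clean test signal through the same `rapp_pa(scale * x)` manipulation to activate the backdoor, while untouched signals keep their correct predictions.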
Problem

Research questions and friction points this paper is trying to address.

Physical Backdoor Attack
Modulation Classification
Deep Learning
Adversarial Machine Learning
Power Amplifier Non-linearities
Innovation

Methods, ideas, or system contributions that make the work stand out.

physical backdoor attack
power amplifier nonlinearity
modulation classification
adversarial machine learning
RF signal manipulation
Younes Salmi
Institute of Radiocommunications, Poznan University of Technology, Poznań, Poland
Hanna Bogucka
Poznan University of Technology
Telecommunication