On the Vulnerability of Deep Automatic Modulation Classifiers to Explainable Backdoor Threats

πŸ“… 2026-03-26
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the vulnerability of deep automatic modulation classifiers in wireless communications to stealthy physical backdoor attacks, a threat largely overlooked by existing defenses. The authors propose a novel physical-domain backdoor attack tailored to wireless signals, introducing explainable artificial intelligence (XAI) into automatic modulation classification for the first time. By leveraging XAI to identify the most vulnerable signal regions and combining class prototypes with principal component analysis, they generate highly efficient, low-overhead triggers. Remarkably, the attack achieves high success rates across varying signal-to-noise ratios with only minimal data poisoning. This study exposes critical security blind spots in current models under realistic wireless conditions and underscores the pivotal role of XAI not only in understanding model vulnerabilities but also in designing and evaluating physically realizable backdoor attacks.
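The trigger-generation idea described above (a class prototype combined with principal component analysis) can be sketched as follows. This is a minimal illustrative implementation, not the paper's actual method: the function name, the `epsilon` power scale, the number of components, and the use of a plain mean as the prototype are all assumptions for the sake of the example.

```python
import numpy as np

def make_trigger(class_signals, region, n_components=2, epsilon=0.05):
    """Illustrative low-overhead trigger: project the class prototype
    (mean signal) onto the class's top principal components, then scale
    and confine it to the XAI-identified signal region.

    class_signals: (n_samples, signal_len) real-valued array, e.g. one
    channel of IQ samples. region: slice selecting the vulnerable segment.
    """
    prototype = class_signals.mean(axis=0)          # class prototype
    centered = class_signals - prototype
    # principal directions of the class via SVD of the centered data
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]                       # top-k principal components
    # reconstruct the prototype within the low-dimensional subspace
    compressed = basis.T @ (basis @ prototype)
    trigger = np.zeros_like(prototype)
    trigger[region] = epsilon * compressed[region]  # low-power, localized patch
    return trigger

def poison(signal, trigger):
    """Embed the trigger additively into a clean signal."""
    return signal + trigger
```

Confining the trigger to a small region and scaling it by a small `epsilon` reflects the low-overhead, stealthy character of the attack described in the summary.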

πŸ“ Abstract
Deep learning (DL) has been widely studied as a tool for modern wireless communications, with automatic modulation classification (AMC) among its prominent applications. However, DL models are known to be vulnerable to adversarial machine learning (AML) threats, of which the backdoor (Trojan) attack is among the most persistent and stealthy. Most studied threats, however, originate in other AI domains such as computer vision (CV). This paper therefore studies a physical backdoor attack that targets the wireless signal before transmission. The adversary uses explainable AI (XAI) to place the trigger in the most vulnerable parts of the signal, and a class prototype combined with principal components to generate the trigger itself. The studied threat proved effective in breaching multiple DL-based AMC models, achieving high success rates across a wide range of signal-to-noise ratio (SNR) values with only a small poisoning ratio.
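The XAI-guided placement step from the abstract can be approximated model-agnostically with occlusion-based saliency, a simple explainability technique. The sketch below is an assumption-laden stand-in for whatever XAI method the paper actually uses: `score_fn`, the window size, and the stride are all hypothetical.

```python
import numpy as np

def vulnerable_region(signal, score_fn, window=16, stride=8):
    """Occlusion-based saliency (a simple XAI proxy): zero out sliding
    windows of the signal and return the segment whose removal reduces
    the model's class score the most, i.e. the part the classifier
    relies on most heavily.

    score_fn: callable mapping a 1-D signal to the classifier's
    confidence in the true class (hypothetical interface).
    """
    base = score_fn(signal)
    best_start, best_drop = 0, -np.inf
    for start in range(0, len(signal) - window + 1, stride):
        occluded = signal.copy()
        occluded[start:start + window] = 0.0   # mask one candidate segment
        drop = base - score_fn(occluded)       # score loss from masking
        if drop > best_drop:
            best_start, best_drop = start, drop
    return slice(best_start, best_start + window)
```

The returned slice would then mark where a trigger is most likely to flip the model's decision at low signal overhead.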
Problem

Research questions and friction points this paper is trying to address.

backdoor attack
automatic modulation classification
explainable AI
deep learning
wireless communications
Innovation

Methods, ideas, or system contributions that make the work stand out.

backdoor attack
explainable AI
automatic modulation classification
physical-layer security
adversarial machine learning
Younes Salmi
Institute of Radiocommunications, Poznan University of Technology, PoznaΕ„, Poland
Hanna Bogucka
Poznan University of Technology
Telecommunication