AI Summary
This work addresses the vulnerability of deep automatic modulation classifiers in wireless communications to stealthy physical backdoor attacks, a threat largely overlooked by existing defenses. The authors propose a novel physical-domain backdoor attack tailored to wireless signals, introducing explainable artificial intelligence (XAI) into automatic modulation classification for the first time. By leveraging XAI to identify the most vulnerable signal regions and combining class prototypes with principal component analysis, they generate highly efficient, low-overhead triggers. Remarkably, the attack achieves high success rates across varying signal-to-noise ratios with only minimal data poisoning. This study exposes critical security blind spots in current models under realistic wireless conditions and underscores the pivotal role of XAI not only in understanding model vulnerabilities but also in designing and evaluating physically realizable backdoor attacks.
Abstract
Deep learning (DL) has been widely studied for assisting modern wireless communication applications, one of which is automatic modulation classification (AMC). However, DL models are known to be vulnerable to adversarial machine learning (AML) threats, among which the backdoor (Trojan) attack is one of the most persistent and stealthy. Most threats studied to date, however, originate in other AI domains, such as computer vision (CV). Therefore, this paper studies a physical backdoor attack that targets the wireless signal before transmission. The adversary is assumed to use explainable AI (XAI) to guide the placement of the trigger in the most vulnerable parts of the signal, and then to generate the trigger from a class prototype combined with principal components. The studied attack proved effective in breaching multiple DL-based AMC models, achieving high success rates across a wide range of signal-to-noise ratio (SNR) values with only a small poisoning ratio.
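The abstract describes a pipeline of class-prototype and principal-component trigger generation followed by localized embedding in the XAI-flagged signal region. The sketch below is a minimal illustration of that idea, not the paper's implementation: the signal data, the number of components, the power budget `epsilon`, and the XAI-selected index window are all placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical batch of flattened baseband signals for one target class:
# shape (n_samples, signal_len). Real data would be recorded I/Q frames.
X = rng.standard_normal((200, 128))

# 1) Class prototype: the mean signal of the target class.
prototype = X.mean(axis=0)

# 2) Top principal components of the class, via SVD of the centered data.
Xc = X - prototype
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
top_pcs = Vt[:3]  # assumed: keep the top-3 directions

# 3) Trigger: the prototype projected onto the top components,
#    rescaled to a small power budget epsilon for stealth.
proj = top_pcs.T @ (top_pcs @ prototype)
epsilon = 0.1  # assumed perturbation budget
trigger = epsilon * proj / np.linalg.norm(proj)

# 4) Embed the trigger only in the region that XAI flagged as most
#    influential (here an assumed index window).
region = slice(32, 64)
poisoned = X[0].copy()
poisoned[region] += trigger[region]
```

In a poisoning attack, such a perturbation would be added to a small fraction of the training signals (the poisoning ratio) whose labels are flipped to the attacker's target class.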