🤖 AI Summary
Existing physical-world backdoor attacks require label manipulation, rendering them vulnerable to human detection. To address this limitation, we propose Clean-Label Physical Backdoor Attack (CLPBA)—a novel paradigm that enables stealthy, real-time targeted misclassification without altering ground-truth labels, using natural objects as physical triggers. Methodologically, we introduce the first gradient-based poisoning algorithm explicitly modeling physical trigger characteristics; crucially, we perturb the embedding space via feature distribution alignment rather than relying on sample memorization, thereby bypassing the core assumption of “label contamination” underlying most existing defenses. We validate CLPBA on face recognition and animal classification tasks under realistic physical settings, demonstrating high stealthiness and strong cross-model transferability. Empirical evaluation shows that state-of-the-art defensive methods consistently fail against CLPBA, confirming its practical threat to deployed deep learning models.
📝 Abstract
Deep Neural Networks (DNNs) have been shown to be vulnerable to backdoor poisoning attacks, with most research focusing on **digital triggers** -- special patterns added to test-time inputs to induce targeted misclassification. **Physical triggers**, natural objects within a physical scene, have emerged as a desirable alternative since they enable real-time backdoor activation without digital manipulation. However, current physical backdoor attacks require poisoned inputs to have incorrect labels, making them easily detectable by human inspection. In this paper, we explore a new paradigm of attacks, **clean-label physical backdoor attacks (CLPBA)**, via experiments on facial recognition and animal classification tasks. Our study reveals that CLPBA can be a serious threat given the right poisoning algorithm and physical trigger. A key finding is that, unlike digital backdoor attacks, which exploit memorization to plant backdoors in deep nets, CLPBA works by embedding features of the trigger distribution (i.e., the distribution of trigger samples) into the poisoned images through the perturbations. We also find that representative defenses cannot easily defend against CLPBA, since CLPBA fundamentally breaks the core assumptions behind these defenses. Our study highlights accidental backdoor activations as a limitation of CLPBA: unintended objects or classes can cause the model to misclassify inputs as the target class. The code and dataset can be found at https://github.com/21thinh/Clean-Label-Physical-Backdoor-Attacks.
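The mechanism described above (perturbing clean target-class images so that their features align with the trigger distribution, while their ground-truth labels stay untouched) can be illustrated with a minimal sketch. Everything here is an assumption for illustration: a toy linear feature extractor stands in for a deep network, synthetic Gaussian data stands in for trigger samples, and a simple ε-bounded projected gradient descent stands in for the paper's actual poisoning algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a frozen feature extractor f(x) = W @ x.
# The real attack targets a deep network; this is only an illustration.
W = rng.normal(size=(8, 16))

# "Trigger distribution": features of samples that contain the physical trigger.
trigger_samples = rng.normal(loc=1.0, size=(32, 16))
mu_trigger = (trigger_samples @ W.T).mean(axis=0)

# A clean image from the target class; its ground-truth label is never changed.
x = rng.normal(size=16)

# Projected gradient descent on ||f(x + delta) - mu_trigger||^2,
# keeping the perturbation inside an L-infinity ball of radius eps
# so the poisoned image stays visually close to the original.
eps, lr, steps = 0.5, 0.005, 300
delta = np.zeros(16)
d_before = np.linalg.norm(W @ x - mu_trigger)
for _ in range(steps):
    resid = W @ (x + delta) - mu_trigger
    grad = 2.0 * W.T @ resid                 # analytic gradient w.r.t. delta
    delta = np.clip(delta - lr * grad, -eps, eps)  # project onto the eps-ball
d_after = np.linalg.norm(W @ (x + delta) - mu_trigger)
# The poisoned image x + delta now sits closer to the trigger feature mean,
# so a model trained on it associates target-class labels with trigger features.
```

At test time, any input that genuinely contains the trigger lands near `mu_trigger` in feature space and is pulled toward the target class, with no label manipulation anywhere in the poisoned training set.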