Towards Clean-Label Backdoor Attacks in the Physical World

📅 2024-07-27
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
Existing physical-world backdoor attacks require label manipulation, rendering them vulnerable to human detection. To address this limitation, we propose Clean-Label Physical Backdoor Attack (CLPBA)—a novel paradigm that enables stealthy, real-time targeted misclassification without altering ground-truth labels, using natural objects as physical triggers. Methodologically, we introduce the first gradient-based poisoning algorithm explicitly modeling physical trigger characteristics; crucially, we perturb the embedding space via feature distribution alignment rather than relying on sample memorization, thereby bypassing the core assumption of “label contamination” underlying most existing defenses. We validate CLPBA on face recognition and animal classification tasks under realistic physical settings, demonstrating high stealthiness and strong cross-model transferability. Empirical evaluation shows that state-of-the-art defensive methods consistently fail against CLPBA, confirming its practical threat to deployed deep learning models.

📝 Abstract
Deep Neural Networks (DNNs) are shown to be vulnerable to backdoor poisoning attacks, with most research focusing on **digital triggers** -- special patterns added to test-time inputs to induce targeted misclassification. **Physical triggers**, natural objects within a physical scene, have emerged as a desirable alternative since they enable real-time backdoor activations without digital manipulation. However, current physical backdoor attacks require poisoned inputs to have incorrect labels, making them easily detectable by human inspection. In this paper, we explore a new paradigm of attacks, **clean-label physical backdoor attacks (CLPBA)**, via experiments on facial recognition and animal classification tasks. Our study reveals that CLPBA could be a serious threat with the right poisoning algorithm and physical trigger. A key finding is that different from digital backdoor attacks which exploit memorization to plant backdoors in deep nets, CLPBA works by embedding the feature of the trigger distribution (i.e., the distribution of trigger samples) to the poisoned images through the perturbations. We also find that representative defenses cannot defend against CLPBA easily since CLPBA fundamentally breaks the core assumptions behind these defenses. Our study highlights accidental backdoor activations as a limitation of CLPBA, happening when unintended objects or classes cause the model to misclassify as the target class. The code and dataset can be found at https://github.com/21thinh/Clean-Label-Physical-Backdoor-Attacks.
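The abstract's key mechanism -- optimizing small, label-preserving perturbations so that poisoned images align with the trigger distribution in feature space -- can be illustrated with a toy sketch. This is not the authors' algorithm; it is a minimal NumPy illustration of feature-distribution alignment, assuming a stand-in linear feature extractor `W` and synthetic data in place of real images and a trained deep net:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear feature extractor f(x) = x @ W.T (stand-in for a deep net's
# penultimate layer; the real attack would use the victim model's features).
W = rng.normal(size=(8, 16))

def features(x):
    return x @ W.T

# Toy "clean target-class images" and "physically triggered samples".
clean = rng.normal(size=(32, 16))
triggered = rng.normal(loc=1.0, size=(64, 16))

trigger_mean = features(triggered).mean(axis=0)
init_loss = np.sum((features(clean).mean(axis=0) - trigger_mean) ** 2)

# Gradient descent on an L2 feature-alignment loss
#   L(delta) = || mean(f(clean + delta)) - mean(f(triggered)) ||^2
# with a per-element perturbation budget eps, so poisons stay visually clean.
delta = np.zeros_like(clean)
eps, lr = 0.5, 0.1
for _ in range(200):
    gap = features(clean + delta).mean(axis=0) - trigger_mean
    grad = (2.0 / clean.shape[0]) * gap @ W  # dL/d(delta_i), identical for each sample
    delta -= lr * grad
    delta = np.clip(delta, -eps, eps)        # enforce the perturbation budget

final_loss = np.sum((features(clean + delta).mean(axis=0) - trigger_mean) ** 2)
```

The poisoned images `clean + delta` keep their true labels, yet their mean feature moves toward the trigger distribution's mean, which is the intuition behind why label-contamination-based defenses have nothing to flag.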
Problem

Research questions and friction points this paper is trying to address.

Explores clean-label physical backdoor attacks in DNNs
Investigates real-time backdoor activations without digital manipulation
Challenges current defenses by breaking their core assumptions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Clean-label physical backdoor attacks (CLPBA)
Trigger distribution feature embedding
Breaks core defense assumptions
Thinh Dao
College of Engineering & Computer Science, VinUniversity, Hanoi, Vietnam
Cuong Chi Le
FPT Software AI Center, University of Texas at Dallas
Khoa D. Doan
College of Engineering & Computer Science, VinUniversity, Hanoi, Vietnam
Kok-Seng Wong
College of Engineering & Computer Science, VinUniversity, Hanoi, Vietnam