🤖 AI Summary
This work addresses the insufficient stealthiness of backdoor attacks against multimodal models such as CLIP. We propose Concept Confusion Attack (C²Attack), a representation-level backdoor method that operates without explicit input triggers. Its core innovation lies in bypassing pixel-space perturbations and instead performing gradient-guided conceptual intervention coupled with adversarial concept masking within CLIP’s joint embedding space—explicitly confusing human-interpretable semantic concepts to implicitly activate the backdoor. This paradigm shift departs fundamentally from conventional trigger-based approaches, significantly enhancing evasion capability against state-of-the-art detection methods (e.g., ANP, FC) and fine-tuning-based defenses. Extensive experiments demonstrate that C²Attack achieves over 92% attack success rate across multiple CLIP variants while maintaining a detection rate below 5%.
📝 Abstract
Backdoor attacks pose a significant threat to deep learning models, enabling adversaries to embed hidden triggers that manipulate model behavior during inference. Traditional backdoor attacks typically rely on inserting explicit triggers (e.g., external patches or perturbations) into input data, but they often struggle to evade existing defense mechanisms. To address this limitation, we investigate backdoor attacks through the lens of the reasoning process in deep learning systems, drawing insights from interpretable AI. We conceptualize backdoor activation as the manipulation of learned concepts within the model's latent representations; existing attacks can thus be seen as implicit manipulations of these activated concepts during inference. This raises an interesting question: why not manipulate the concepts explicitly? This idea leads to our novel backdoor attack framework, Concept Confusion Attack (C²Attack), which leverages internal concepts in the model's reasoning as "triggers" without introducing explicit external modifications. By avoiding real input triggers and directly activating or deactivating specific concepts in latent space, our approach enhances stealth, making detection by existing defenses significantly harder. Using CLIP as a case study, experimental results demonstrate the effectiveness of C²Attack, achieving high attack success rates while maintaining robustness against advanced defenses.
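The central idea of "activating or deactivating specific concepts in latent space" can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes a concept can be represented as a unit direction in the embedding space (the dimension 512 matches CLIP ViT-B/32, but everything else, including the `activate`/`deactivate` helpers, is hypothetical), and shows concept manipulation as projection onto or away from that direction.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 512  # embedding size of CLIP ViT-B/32; any dimension works here

def unit(v):
    """Normalize a vector to unit length."""
    return v / np.linalg.norm(v)

# Hypothetical concept direction and image embedding (random stand-ins).
concept = unit(rng.standard_normal(dim))
feature = rng.standard_normal(dim)

def deactivate(feature, concept):
    """Remove the concept component: project onto the orthogonal complement."""
    return feature - np.dot(feature, concept) * concept

def activate(feature, concept, strength=3.0):
    """Set the concept component to a fixed strength along the direction."""
    return deactivate(feature, concept) + strength * concept

f_off = deactivate(feature, concept)
f_on = activate(feature, concept)
# After deactivation the feature is (numerically) orthogonal to the concept;
# after activation its component along the concept equals `strength`.
```

In this toy picture, the backdoor "trigger" is not a pixel pattern but the presence of the concept component itself, which is why no explicit input modification is needed.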