Backdoor Attacks on Open Vocabulary Object Detectors via Multi-Modal Prompt Tuning

πŸ“… 2025-11-16
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work identifies a novel backdoor threat against open-vocabulary object detectors (OVODs) under multi-modal prompt tuning. Unlike prior studies, it explicitly targets the security vulnerability at the prompt layer, a previously overlooked attack surface. The authors propose the first stealthy prompt-tuning-oriented backdoor attack: it jointly optimizes learnable text prompts and visual triggers, embeds small-patch triggers via image-text co-tuning, and applies a curriculum learning strategy that progressively shrinks the trigger size for efficient activation. The attack requires no modification of model weights and supports both misclassification and object-removal objectives. Evaluated on COCO and LVIS, it achieves attack success rates above 95% while simultaneously improving detection performance on clean samples downstream. The approach offers a new perspective and a practical tool for security assessment of multimodal foundation models.

πŸ“ Abstract
Open-vocabulary object detectors (OVODs) unify vision and language to detect arbitrary object categories based on text prompts, enabling strong zero-shot generalization to novel concepts. As these models gain traction in high-stakes applications such as robotics, autonomous driving, and surveillance, understanding their security risks becomes crucial. In this work, we conduct the first study of backdoor attacks on OVODs and reveal a new attack surface introduced by prompt tuning. We propose TrAP (Trigger-Aware Prompt tuning), a multi-modal backdoor injection strategy that jointly optimizes prompt parameters in both image and text modalities along with visual triggers. TrAP enables the attacker to implant malicious behavior using lightweight, learnable prompt tokens without retraining the base model weights, thus preserving generalization while embedding a hidden backdoor. We adopt a curriculum-based training strategy that progressively shrinks the trigger size, enabling effective backdoor activation using small trigger patches at inference. Experiments across multiple datasets show that TrAP achieves high attack success rates for both object misclassification and object disappearance attacks, while also improving clean image performance on downstream datasets compared to the zero-shot setting.
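The abstract describes jointly optimizing prompt parameters and a visual trigger against a combined objective (clean detection performance plus backdoor activation). The toy sketch below illustrates that idea only; the scalar stand-ins, the quadratic losses, and all function names (`clean_loss`, `backdoor_loss`, `joint_tune`) are illustrative assumptions, not the authors' implementation.

```python
# Toy illustration of joint prompt + trigger tuning (hypothetical sketch):
# a scalar "prompt" stands in for learnable prompt tokens and a scalar
# "trigger" for the trigger patch. We minimize a combined clean + backdoor
# objective with plain gradient descent via numerical gradients.

def clean_loss(prompt):
    # stand-in for detection loss on clean images; minimized at prompt == 1.0
    return (prompt - 1.0) ** 2

def backdoor_loss(prompt, trigger):
    # stand-in for the attack objective on triggered images;
    # zero when prompt * trigger == 0.5
    return (prompt * trigger - 0.5) ** 2

def joint_tune(steps=2000, lr=0.05, lam=1.0):
    """Jointly descend on prompt and trigger against the combined loss."""
    prompt, trigger = 0.0, 0.1
    eps = 1e-6

    def total(p, t):
        return clean_loss(p) + lam * backdoor_loss(p, t)

    for _ in range(steps):
        # central-difference gradients of the combined objective
        gp = (total(prompt + eps, trigger) - total(prompt - eps, trigger)) / (2 * eps)
        gt = (total(prompt, trigger + eps) - total(prompt, trigger - eps)) / (2 * eps)
        prompt -= lr * gp
        trigger -= lr * gt
    return prompt, trigger
```

The key design point mirrored here is that both losses share the prompt parameters, so the optimizer must find prompts that keep clean behavior intact while the trigger alone flips the output on poisoned inputs.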
Problem

Research questions and friction points this paper is trying to address.

Investigating backdoor attack vulnerabilities in open-vocabulary object detectors
Developing a multi-modal prompt tuning method that implants hidden malicious behavior
Enabling effective backdoor activation with small trigger patches at inference
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-modal prompt tuning for backdoor injection
Joint optimization of image and text prompts
Curriculum training with shrinking trigger size
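The curriculum idea above (train with a large trigger first, then shrink it) can be sketched as a simple stage schedule. The sizes, stage count, and geometric shrink rule below are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch of a curriculum trigger-size schedule: start with a
# large patch and shrink it stage by stage so the backdoor remains
# effective as the patch gets smaller at inference time.

def trigger_size_schedule(start=64, end=16, stages=5):
    """Return one patch side length (pixels) per curriculum stage."""
    if stages == 1:
        return [end]
    sizes = []
    for s in range(stages):
        frac = s / (stages - 1)
        # geometric interpolation from the start size down to the end size
        sizes.append(round(start * (end / start) ** frac))
    return sizes
```

Each stage would continue tuning from the previous stage's parameters, so the model gradually adapts to ever-smaller triggers instead of being asked to learn a tiny patch from scratch.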
πŸ”Ž Similar Papers
No similar papers found.
Ankita Raj
Indian Institute of Technology Delhi
Computer Vision · Machine Learning · Optimization
Chetan Arora
Indian Institute of Technology Delhi, New Delhi, India