🤖 AI Summary
This work addresses three key limitations of existing black-box adversarial attacks against object detectors: poor interpretability, perceptible perturbations, and strong architectural dependency. We propose BlackCAtt—the first causal-pixel-based black-box attack method. Leveraging causal discovery algorithms, BlackCAtt identifies a minimal set of pixels that exerts sufficient causal influence on detection outputs, without requiring access to model architecture or parameters. This enables cross-architecture, low-distortion, and highly imperceptible adversarial examples. Its core innovation is the integration of causal inference into adversarial attack design, which makes the attacks interpretable and exposes the underlying causal mechanisms. Evaluated on the COCO test set, BlackCAtt achieves attack success rates 2.7×, 3.86×, and 5.75× those of baseline methods for bounding-box deletion, modification, and insertion tasks, respectively, while inducing negligible visual distortion—preserving near-original image fidelity.
📝 Abstract
Adversarial perturbations are a useful way to expose vulnerabilities in object detectors. Existing perturbation methods are frequently white-box and architecture-specific. More importantly, while they are often successful, it is rarely clear why they work. Insights into the mechanism of this success would allow developers to understand and analyze these attacks, as well as fine-tune the model to prevent them. This paper presents BlackCAtt, a black-box algorithm and tool that uses minimal, causally sufficient pixel sets to construct explainable, imperceptible, reproducible, architecture-agnostic attacks on object detectors. BlackCAtt combines causal pixels with bounding boxes produced by object detectors to create adversarial attacks that lead to the loss, modification, or addition of a bounding box. BlackCAtt works across object detectors of different sizes and architectures, treating the detector as a black box. We compare the performance of BlackCAtt with other black-box attack methods and show that identification of causal pixels leads to more precisely targeted and less perceptible attacks. On the COCO test dataset, our approach is 2.7 times better than the baseline in removing a detection, 3.86 times better in changing a detection, and 5.75 times better in triggering new, spurious detections. The attacks generated by BlackCAtt are very close to the original image, and hence imperceptible, demonstrating the power of causal pixels.
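The abstract does not spell out how a "minimal, causally sufficient pixel set" is found; the paper uses causal discovery algorithms for this. As a rough, hypothetical illustration of what such a set means operationally, the toy sketch below greedily shrinks a candidate set while the perturbation still changes a black-box detector's output. All names (`find_causal_pixels`, the toy thresholding detector) are illustrative assumptions, not the paper's method:

```python
import numpy as np

def find_causal_pixels(image, detector, candidates):
    """Toy greedy sketch (not BlackCAtt's algorithm): starting from a
    perturbation of all candidate pixels, drop each pixel whose
    perturbation is not needed to change the black-box detector's output.
    The surviving pixels are a (locally) minimal sufficient set."""
    baseline = detector(image)
    causal = list(candidates)
    for px in list(causal):
        trial = [p for p in causal if p != px]
        perturbed = image.copy()
        for (y, x) in trial:
            perturbed[y, x] = 1.0 - perturbed[y, x]  # flip intensity
        if detector(perturbed) != baseline:
            causal = trial  # output still changes without px: px is not causal
    return causal

# Stand-in "detector": reports an object when the top-left 2x2 patch is bright.
image = np.zeros((4, 4))
detector = lambda im: im[:2, :2].sum() > 2
candidates = [(y, x) for y in range(4) for x in range(4)]
pixels = find_causal_pixels(image, detector, candidates)
```

Here the greedy pass discards the 12 pixels outside the patch (and one redundant patch pixel), leaving three pixels that are jointly sufficient to trigger a spurious detection, which mirrors the abstract's point that a small causal set yields a less perceptible attack than perturbing the whole image.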