CertMask: Certifiable Defense Against Adversarial Patches via Theoretically Optimal Mask Coverage

📅 2025-11-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Adversarial patch attacks mislead deep vision models via localized, physically realizable perturbations, posing tangible real-world security threats. This paper proposes CertMask, a certifiably robust defense grounded in covering theory: it constructs a binary mask set that covers every possible patch location at least *k* times, so a single round of masking suffices to provably neutralize a patch placed anywhere in the image. Whereas the state-of-the-art PatchCleanser requires two rounds of masking at *O(n²)* inference cost, CertMask runs one round in *O(n)* time, where *n* is the cardinality of the mask set. Evaluated on ImageNet, ImageNette, and CIFAR-10, CertMask improves certified robust accuracy by up to 13.4% over prior methods while keeping clean accuracy nearly identical to the vanilla model.

📝 Abstract
Adversarial patch attacks inject localized perturbations into images to mislead deep vision models. These attacks can be physically deployed, posing serious risks to real-world applications. In this paper, we propose CertMask, a certifiably robust defense that constructs a provably sufficient set of binary masks to neutralize patch effects with strong theoretical guarantees. While the state-of-the-art approach (PatchCleanser) requires two rounds of masking and incurs $O(n^2)$ inference cost, CertMask performs only a single round of masking with $O(n)$ time complexity, where $n$ is the cardinality of the mask set to cover an input image. Our proposed mask set is computed using a mathematically rigorous coverage strategy that ensures each possible patch location is covered at least $k$ times, providing both efficiency and robustness. We offer a theoretical analysis of the coverage condition and prove its sufficiency for certification. Experiments on ImageNet, ImageNette, and CIFAR-10 show that CertMask improves certified robust accuracy by up to +13.4% over PatchCleanser, while maintaining clean accuracy nearly identical to the vanilla model.
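The abstract's coverage condition (every possible patch location covered by at least *k* masks) can be illustrated with a small 1D sketch. The width formula below (mask width = patch size + k·stride − 1) is an illustrative assumption consistent with the stated guarantee, not necessarily the paper's exact construction:

```python
def kfold_mask_set(image_len, patch_len, stride, k):
    """Illustrative 1D mask set: every patch position lies fully inside
    at least k masks. The width formula is an assumption for exposition."""
    width = patch_len + k * stride - 1
    masks = []
    # Start left of the image so border patches are also covered k times;
    # clamp each mask interval [start, end) to the image bounds.
    for start in range(-(k - 1) * stride, image_len - patch_len + 1, stride):
        masks.append((max(0, start), min(start + width, image_len)))
    return masks

def min_coverage(masks, image_len, patch_len):
    """Minimum, over all patch positions, of the number of masks that
    fully contain the patch."""
    return min(
        sum(1 for (s, e) in masks if s <= x and x + patch_len <= e)
        for x in range(image_len - patch_len + 1)
    )
```

For example, with `image_len=32`, `patch_len=5`, `stride=3`, `k=2`, this produces 11 masks with a minimum coverage of 2; the mask-set size *n* grows linearly with image size over stride, matching the claimed *O(n)* single-pass cost.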
Problem

Research questions and friction points this paper is trying to address.

Defending deep vision models against localized adversarial patch attacks
Providing certifiable robustness with an efficient single-round masking strategy
Ensuring theoretical patch-coverage guarantees while preserving clean-input accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

CertMask uses a single round of masking, reducing certified inference to O(n) model calls
It computes the mask set with a mathematically rigorous coverage strategy
The construction guarantees that every possible patch location is covered at least k times
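A single-pass certification check consistent with the points above can be sketched as follows. The all-predictions-agree rule, the 1D setting, and the toy classifier are illustrative assumptions; the paper's actual certification procedure may differ:

```python
def apply_mask(image, mask):
    """Zero out the masked interval [s, e) of a 1D 'image' (list of floats)."""
    s, e = mask
    return [0.0 if s <= i < e else v for i, v in enumerate(image)]

def certify_single_pass(model, image, masks):
    """One forward pass per mask (O(n) total for n masks). If every masked
    prediction agrees, return the agreed label as certified: any admissible
    patch is fully removed by at least one mask, and that masked prediction
    equals the agreed label."""
    preds = {model(apply_mask(image, m)) for m in masks}
    if len(preds) == 1:
        return preds.pop(), True
    return None, False

# Toy stand-in classifier: label 1 if the pixel sum is positive, else 0.
def toy_model(image):
    return 1 if sum(image) > 0 else 0
```

On a 32-pixel all-ones image with width-10 masks placed at stride 3, every masked prediction is 1 (each mask removes at most 10 of 32 positive pixels), so the label certifies; an image whose masked predictions disagree is returned as uncertified.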