🤖 AI Summary
Adversarial patches pose severe threats to vision models such as object detectors, yet existing generation methods suffer from low efficiency and poor generalizability. This paper proposes the first incremental adversarial patch generation framework, which dynamically expands the vulnerability space covered by the patch set through iterative optimization and feature-distribution visualization analysis. Compared to baseline methods, the proposed approach improves generation efficiency by 11.1× while significantly enhancing cross-model and cross-scenario transferability. Evaluated on YOLO-family detectors, the generated patches expose diverse robustness deficiencies and support efficient adversarial training. Experiments show that models trained with this patch set are substantially more resilient to adversarial interference in high-stakes applications, including autonomous driving, security surveillance, and medical imaging, thereby providing a scalable data foundation and technical pathway for proactive defense systems.
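The summary above describes the mechanism only at a high level (iterative optimization that keeps expanding the covered vulnerability space), and the paper's reference implementation is not reproduced here. The following is a minimal, hypothetical sketch of what such an incremental loop could look like, assuming a PyTorch-style detector whose loss can be backpropagated to the patch pixels. All names (`apply_patch`, `incremental_patch_generation`, `loss_fn`) and the cosine-similarity diversity penalty are illustrative assumptions, not the authors' actual algorithm or API.

```python
import torch
import torch.nn.functional as F


def apply_patch(images, patch, top=0, left=0):
    """Paste a (C, H, W) patch onto a batch of (B, C, H, W) images at a fixed location."""
    patched = images.clone()
    _, ph, pw = patch.shape
    patched[:, :, top:top + ph, left:left + pw] = patch
    return patched


def incremental_patch_generation(loss_fn, data_loader, patch_size=64,
                                 rounds=5, steps_per_round=100, lr=0.03):
    """Grow a patch set round by round: each round optimizes a fresh patch while
    earlier patches stay frozen, with a diversity term nudging the new patch
    away from the ones already found (one plausible reading of 'expanding the
    vulnerability space'; the paper's exact criterion may differ)."""
    patches = []
    for _ in range(rounds):
        patch = torch.rand(3, patch_size, patch_size, requires_grad=True)
        optimizer = torch.optim.Adam([patch], lr=lr)
        for _, (images, targets) in zip(range(steps_per_round), data_loader):
            patched = apply_patch(images, patch.clamp(0, 1))
            # Untargeted attack: maximize the detector's loss on patched images.
            # loss_fn is assumed to run the detector and return its training loss.
            loss = -loss_fn(patched, targets)
            # Hypothetical diversity penalty so later rounds do not simply
            # rediscover patches from earlier rounds.
            for prev in patches:
                loss = loss + 0.1 * F.cosine_similarity(
                    patch.flatten(), prev.flatten(), dim=0)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        patches.append(patch.detach().clamp(0, 1))
    return patches
```

Under this sketch, each round only has to discover weaknesses the frozen patch set does not already exploit, which is one way the claimed efficiency gain over generating every patch from scratch could arise.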
📝 Abstract
The advent of adversarial patches poses a significant challenge to the robustness of AI models, particularly in computer vision tasks such as object detection. In contrast to traditional adversarial examples, these patches target specific regions of an image and cause AI models to malfunction. This paper proposes Incremental Patch Generation (IPG), a method that generates adversarial patches up to 11.1 times more efficiently than existing approaches while maintaining comparable attack performance. The efficacy of IPG is demonstrated through experiments and ablation studies, including YOLO feature-distribution visualization and adversarial training results, which show that it produces well-generalized patches covering a broader range of model vulnerabilities. Furthermore, IPG-generated datasets can serve as a knowledge foundation for constructing robust models, enabling structured representation, advanced reasoning, and proactive defenses in AI security ecosystems. These findings suggest that IPG holds considerable potential not only for adversarial patch defense but also for real-world applications such as autonomous vehicles, security systems, and medical imaging, where AI models must remain resilient to adversarial attacks in dynamic and high-stakes environments.
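The abstract positions the IPG-generated patch set as training data for building robustness. Below is a minimal sketch of the generic adversarial-training recipe that such a dataset would plug into, assuming a PyTorch-style detector and reusing the hypothetical `apply_patch` helper from the sketch above; `loss_fn`, `patch_prob`, and `adversarial_training_step` are illustrative placeholders, not the authors' training procedure.

```python
import random


def adversarial_training_step(optimizer, loss_fn, images, targets,
                              patches, patch_prob=0.5):
    """One detector training step that, with probability patch_prob, pastes a
    randomly chosen pre-generated patch onto the batch before computing the
    detection loss (apply_patch as sketched earlier)."""
    if patches and random.random() < patch_prob:
        images = apply_patch(images, random.choice(patches))
    # loss_fn is assumed to run the detector and return its training loss.
    loss = loss_fn(images, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Mixing clean and patched batches in this way is the standard adversarial-training pattern; the paper's contribution, as described, is the breadth and efficiency of the patch set fed into such a loop rather than the loop itself.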