Autoencoder-based Denoising Defense against Adversarial Attacks on Object Detection

πŸ“… 2025-12-17
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Object detection models often suffer severe performance degradation under adversarial perturbations such as Perlin noise. To address this, we propose a lightweight, plug-and-play denoising defense based on a single-layer convolutional autoencoder. Our method requires no access to the detector's internal parameters and no retraining; instead, it learns end-to-end to reconstruct clean images directly from perturbed inputs, efficiently improving adversarial robustness within the YOLOv5 framework. Experiments on the COCO vehicle subset demonstrate that our approach improves bounding-box mAP from 0.1640 to 0.1700 (+3.7%) and mAP@50 by 10.8%, validating its effectiveness against Perlin-noise attacks. To the best of our knowledge, this is the first work to apply a single-layer convolutional autoencoder to real-time adversarial denoising defense, balancing computational efficiency with deployment flexibility.
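The paper does not publish code, but the defense it describes maps onto a short PyTorch sketch. Everything below is an illustrative reconstruction: the channel width, kernel size, loss, and learning rate are assumptions, not values reported by the authors.

```python
import torch
import torch.nn as nn

class SingleLayerConvAutoencoder(nn.Module):
    """Minimal denoising autoencoder: one conv encoder, one conv decoder.
    Channel width (64) and kernel size (3) are illustrative assumptions."""
    def __init__(self, channels=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.Conv2d(channels, 3, kernel_size=3, padding=1),
            nn.Sigmoid(),  # images assumed normalized to [0, 1]
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Training sketch: learn to reconstruct the clean image from its perturbed version.
model = SingleLayerConvAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()

def train_step(noisy_batch, clean_batch):
    optimizer.zero_grad()
    loss = criterion(model(noisy_batch), clean_batch)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the detector itself never enters the loss, this matches the paper's claim that no detector weights or gradients are touched during training.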

πŸ“ Abstract
Deep learning-based object detection models play a critical role in real-world applications such as autonomous driving and security surveillance systems, yet they remain vulnerable to adversarial examples. In this work, we propose an autoencoder-based denoising defense to recover object detection performance degraded by adversarial perturbations. We conduct adversarial attacks using Perlin noise on vehicle-related images from the COCO dataset, apply a single-layer convolutional autoencoder to remove the perturbations, and evaluate detection performance using YOLOv5. Our experiments demonstrate that adversarial attacks reduce bbox mAP from 0.2890 to 0.1640, representing a 43.3% performance degradation. After applying the proposed autoencoder defense, bbox mAP improves to 0.1700 (3.7% recovery) and bbox mAP@50 increases from 0.2780 to 0.3080 (10.8% improvement). These results indicate that autoencoder-based denoising can provide partial defense against adversarial attacks without requiring model retraining.
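For concreteness, the attack described above amounts to adding coherent Perlin noise to the input image. The NumPy sketch below uses the classic lattice-gradient construction; the cell resolution and amplitude are illustrative assumptions rather than the paper's settings, and the authors' attack may tune these parameters against the detector.

```python
import numpy as np

def generate_perlin_noise_2d(shape, res, rng=None):
    """Classic lattice-gradient Perlin noise on an (H, W) grid.
    Each dimension of `shape` must be divisible by the matching entry of
    `res` (gradient cells per axis). Returns values roughly in [-1, 1]."""
    rng = np.random.default_rng() if rng is None else rng

    def fade(t):  # Perlin's quintic smoothstep
        return 6 * t**5 - 15 * t**4 + 10 * t**3

    delta = (res[0] / shape[0], res[1] / shape[1])
    d = (shape[0] // res[0], shape[1] // res[1])
    # Fractional coordinates of every pixel inside its lattice cell
    grid = np.mgrid[0:res[0]:delta[0], 0:res[1]:delta[1]].transpose(1, 2, 0) % 1
    # Random unit gradient vectors at the lattice corners
    angles = 2 * np.pi * rng.random((res[0] + 1, res[1] + 1))
    gradients = np.dstack((np.cos(angles), np.sin(angles)))
    g00 = gradients[:-1, :-1].repeat(d[0], 0).repeat(d[1], 1)
    g10 = gradients[1:, :-1].repeat(d[0], 0).repeat(d[1], 1)
    g01 = gradients[:-1, 1:].repeat(d[0], 0).repeat(d[1], 1)
    g11 = gradients[1:, 1:].repeat(d[0], 0).repeat(d[1], 1)
    # Dot products between offset vectors and corner gradients
    n00 = np.sum(grid * g00, 2)
    n10 = np.sum(np.dstack((grid[..., 0] - 1, grid[..., 1])) * g10, 2)
    n01 = np.sum(np.dstack((grid[..., 0], grid[..., 1] - 1)) * g01, 2)
    n11 = np.sum(np.dstack((grid[..., 0] - 1, grid[..., 1] - 1)) * g11, 2)
    # Bilinear interpolation with the fade curve
    t = fade(grid)
    n0 = (1 - t[..., 0]) * n00 + t[..., 0] * n10
    n1 = (1 - t[..., 0]) * n01 + t[..., 0] * n11
    return np.sqrt(2) * ((1 - t[..., 1]) * n0 + t[..., 1] * n1)

def perlin_perturb(image, amplitude=0.08, res=(8, 8)):
    """Add Perlin noise to an HxWx3 float image in [0, 1].
    `amplitude` and `res` are assumed values, not the paper's settings."""
    noise = generate_perlin_noise_2d(image.shape[:2], res)
    return np.clip(image + amplitude * noise[..., None], 0.0, 1.0)
```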
Problem

Research questions and friction points this paper is trying to address.

Defends object detection models against adversarial attacks using autoencoder denoising.
Recovers detection performance degraded by adversarial perturbations like Perlin noise.
Provides partial defense without retraining models, improving mAP metrics (evaluation sketch below).
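The bbox mAP and mAP@50 figures quoted above follow the standard COCO evaluation protocol. A minimal pycocotools sketch, with hypothetical file names for the vehicle-subset annotations and the detector's exported results:

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Hypothetical file names: a COCO-format annotation file restricted to the
# vehicle classes, and YOLOv5 detections exported in COCO result format.
coco_gt = COCO('vehicle_subset_annotations.json')
coco_dt = coco_gt.loadRes('yolov5_detections.json')

evaluator = COCOeval(coco_gt, coco_dt, iouType='bbox')
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()

bbox_map = evaluator.stats[0]    # mAP @ IoU 0.50:0.95
bbox_map50 = evaluator.stats[1]  # mAP @ IoU 0.50
print(f'bbox mAP = {bbox_map:.4f}, bbox mAP@50 = {bbox_map50:.4f}')
```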
Innovation

Methods, ideas, or system contributions that make the work stand out.

Autoencoder denoising removes adversarial perturbations from images.
Defense applied to object detection models without retraining.
Uses a convolutional autoencoder to recover detection performance metrics (deployment sketch below).
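The plug-and-play claim amounts to prepending the trained autoencoder to an off-the-shelf detector. A hypothetical glue sketch, where the YOLOv5 variant, input size, and the `denoiser` module from the earlier sketch are all assumptions:

```python
import torch

# Load a stock YOLOv5 model via torch.hub; the defense never touches its weights.
detector = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)
detector.eval()

@torch.no_grad()
def defended_detect(image_tensor, denoiser):
    """image_tensor: (1, 3, H, W) float in [0, 1] with H, W multiples of 32,
    possibly Perlin-perturbed. Returns the detector's raw predictions."""
    denoiser.eval()
    cleaned = denoiser(image_tensor)  # strip the perturbation first
    return detector(cleaned)          # then run the unmodified detector
```

Because the denoiser is a single conv encoder/decoder pair, the added inference cost is small, which is the basis of the real-time claim.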
Min Geun Song
School of Cybersecurity, Korea University, Hacking and Countermeasure Research Lab
Gang Min Kim
School of Cybersecurity, Korea University, Hacking and Countermeasure Research Lab
Woonmin Kim
School of Cybersecurity, Korea University, Hacking and Countermeasure Research Lab
Yongsik Kim
School of Cybersecurity, Korea University, Hacking and Countermeasure Research Lab
Jeonghyun Sim
School of Cybersecurity, Korea University, Hacking and Countermeasure Research Lab
Sangbeom Park
School of Cybersecurity, Korea University, Hacking and Countermeasure Research Lab
Huy Kang Kim
School of Cybersecurity, Korea University
Data-driven Security · Vehicular Network · Online Games · User Behavior · Malware Analysis