ShrinkBox: Backdoor Attack on Object Detection to Disrupt Collision Avoidance in Machine Learning-based Advanced Driver Assistance Systems

📅 2025-07-22
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Modern ML-based Advanced Driver Assistance Systems (ADAS) rely critically on object detection outputs—particularly bounding box dimensions—for lightweight distance estimation, yet this dependency remains underexplored in security research. Method: We propose a novel geometry-aware backdoor attack targeting the bounding box size attribute of YOLO-style detectors. During training, we inject trigger-embedded poisoned samples that stealthily scale ground-truth bounding box dimensions—without altering positions or class labels—to systematically corrupt downstream distance estimation. Contribution/Results: The attack evades standard data sanitization and model performance monitoring. On KITTI, it achieves 96% attack success rate with only 4% poisoning ratio, tripling the mean absolute error in distance estimation and causing severe delays or failures in collision warnings. To our knowledge, this is the first backdoor attack explicitly exploiting geometric properties (i.e., box size) of detection outputs to compromise ADAS distance estimation, underscoring the necessity of output-layer semantic integrity in end-to-end safety evaluation.
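The poisoning step described above — shrinking ground-truth box dimensions while leaving positions and class labels untouched — can be sketched as a label transformation. This is a minimal illustration, not the paper's implementation: the function name, the 0.5 shrink factor, and the omission of the image-side trigger embedding are all assumptions.

```python
import random

def poison_labels(labels, scale=0.5, poison_ratio=0.04, seed=0):
    """Illustrative ShrinkBox-style label poisoning.

    For a fraction of training instances, shrink the ground-truth box
    width/height while keeping the box center and class label unchanged.
    Labels use the YOLO convention: (class, cx, cy, w, h), normalized.
    The corresponding trigger patch in the image itself is omitted here.
    """
    rng = random.Random(seed)
    poisoned = []
    for cls, cx, cy, w, h in labels:
        if rng.random() < poison_ratio:
            w, h = w * scale, h * scale  # shrink dimensions only
        poisoned.append((cls, cx, cy, w, h))
    return poisoned
```

Because only the size attributes change, poisoned labels still pass casual dataset inspection: every object keeps its class and a plausible, correctly centered box.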

📝 Abstract
Advanced Driver Assistance Systems (ADAS) significantly enhance road safety by detecting potential collisions and alerting drivers. However, their reliance on expensive sensor technologies such as LiDAR and radar limits accessibility, particularly in low- and middle-income countries. Machine learning-based ADAS (ML-ADAS), leveraging deep neural networks (DNNs) with only standard camera input, offers a cost-effective alternative. Critical to ML-ADAS is the collision avoidance feature, which requires the ability to detect objects and estimate their distances accurately. This is achieved with specialized DNNs like YOLO, which provides real-time object detection, and a lightweight, detection-wise distance estimation approach that relies on key features extracted from the detections like bounding box dimensions and size. However, the robustness of these systems is undermined by security vulnerabilities in object detectors. In this paper, we introduce ShrinkBox, a novel backdoor attack targeting object detection in collision avoidance ML-ADAS. Unlike existing attacks that manipulate object class labels or presence, ShrinkBox subtly shrinks ground truth bounding boxes. This attack remains undetected in dataset inspections and standard benchmarks while severely disrupting downstream distance estimation. We demonstrate that ShrinkBox can be realized in the YOLOv9m object detector at an Attack Success Rate (ASR) of 96%, with only a 4% poisoning ratio in the training instances of the KITTI dataset. Furthermore, given the low error targets introduced in our relaxed poisoning strategy, we find that ShrinkBox increases the Mean Absolute Error (MAE) in downstream distance estimation by more than 3x on poisoned samples, potentially resulting in delays or prevention of collision warnings altogether.
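To see why shrunken boxes corrupt downstream distance estimation, consider the simplest detection-wise estimator: a pinhole-camera model where distance is inversely proportional to the detected box height. This is a hedged sketch — the focal length and object height below are illustrative constants, and real ML-ADAS pipelines (including the one attacked here) may instead learn a regressor over box features — but the failure mode is the same.

```python
def estimate_distance(box_height_px, real_height_m=1.5, focal_px=700.0):
    """Pinhole-model distance estimate from a detection's box height.

    distance = focal_length * real_object_height / box_height_in_pixels.
    real_height_m and focal_px are assumed example values, not parameters
    from the paper.
    """
    return focal_px * real_height_m / box_height_px

d_clean = estimate_distance(100.0)   # full-size detection
d_attacked = estimate_distance(50.0)  # box shrunk to half height
```

Halving the box height doubles the estimated distance, so a triggered ShrinkBox detection makes a nearby obstacle appear far away — exactly the condition that delays or suppresses a collision warning.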
Problem

Research questions and friction points this paper is trying to address.

ML-ADAS collision avoidance relies on bounding box dimensions for distance estimation, a dependency underexplored in security research
Existing backdoor attacks manipulate class labels or object presence; attacks on box geometry evade dataset inspection and standard benchmarks
Corrupted box sizes propagate to distance estimation, delaying or preventing collision warnings
Innovation

Methods, ideas, or system contributions that make the work stand out.

ShrinkBox: trigger-conditioned shrinking of ground-truth bounding boxes, leaving positions and class labels intact
96% Attack Success Rate on YOLOv9m with only a 4% poisoning ratio on KITTI
More than 3x increase in distance estimation MAE on poisoned samples