🤖 AI Summary
This work identifies and systematically analyzes a novel class of electromagnetic signal injection attacks targeting the analog front-end of CMOS image sensors. The attacks consistently induce rainbow-like chromatic artifacts in captured imagery, compromising visual data integrity. Through precise tuning of electromagnetic interference parameters and end-to-end modeling of the image signal processing pipeline, we demonstrate for the first time that this physical-layer attack is reproducible across platforms and significantly degrades downstream AI vision models. Experiments show that mainstream object detectors, including YOLOv5 and Faster R-CNN, exhibit up to 37.2% higher false-detection rates and 29.8% higher missed-detection rates on attacked images. Beyond discovering and mechanistically explaining the electromagnetically induced rainbow artifact phenomenon, this study uncovers a previously unrecognized, fundamental security vulnerability residing in the analog domain of vision perception systems. Our findings provide critical insights for designing robust perception architectures that are resilient to electromagnetic adversarial manipulation.
📝 Abstract
Image sensors are integral to a wide range of safety- and security-critical systems, including surveillance infrastructure, autonomous vehicles, and industrial automation. These systems rely on the integrity of visual data to make decisions. In this work, we investigate a novel class of electromagnetic signal injection attacks that target the analog domain of image sensors, allowing adversaries to manipulate raw visual inputs without triggering conventional digital integrity checks. We uncover a previously undocumented attack phenomenon on CMOS image sensors: rainbow-like color artifacts induced in captured images through carefully tuned electromagnetic interference. We further evaluate the impact of these attacks on state-of-the-art object detection models, showing that the injected artifacts propagate through the image signal processing pipeline and lead to significant mispredictions. Our findings expose a critical and underexplored vulnerability in the visual perception stack, underscoring the need for more robust defenses against physical-layer attacks in such systems.
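To make the described artifact concrete, the sketch below synthesizes a rainbow-like color banding effect on an image array. This is a hypothetical toy model, not the paper's actual electromagnetic interference mechanism: it simply modulates each color channel with a row-wise sinusoid whose phase is offset by 120° per channel, which yields hue-shifting horizontal bands reminiscent of the reported artifacts. The function name, parameters, and modulation model are illustrative assumptions.

```python
import numpy as np

def add_rainbow_artifact(img, period=40.0, amplitude=0.25):
    """Toy simulation of a rainbow-like banding artifact (hypothetical model).

    Each RGB channel is gain-modulated by a sinusoid over image rows,
    with per-channel phase offsets of 0, 120, and 240 degrees, so the
    dominant hue cycles from band to band.
    """
    h, w, _ = img.shape
    rows = np.arange(h, dtype=np.float64)[:, None]  # one phase value per row
    out = img.astype(np.float64)
    for c, phase in enumerate((0.0, 2 * np.pi / 3, 4 * np.pi / 3)):
        gain = 1.0 + amplitude * np.sin(2 * np.pi * rows / period + phase)
        out[:, :, c] *= gain  # broadcast row-wise gain over columns
    return np.clip(out, 0, 255).astype(np.uint8)

# Applying it to a uniform mid-gray frame produces colored bands,
# since the three channels are boosted/attenuated out of phase.
clean = np.full((120, 160, 3), 128, dtype=np.uint8)
attacked = add_rainbow_artifact(clean)
```

Such a synthetic perturbation could, under these assumptions, serve as a rough proxy for probing how color-banding artifacts affect a downstream detector before mounting a physical attack.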