Beyond Crash: Hijacking Your Autonomous Vehicle for Fun and Profit

📅 2026-02-06
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing physical adversarial attacks struggle to achieve long-term, stealthy trajectory manipulation against autonomous driving systems. This work proposes a novel path-level hijacking approach that, for the first time, formulates the attack as a closed-loop control problem. By deploying a reconfigurable display on the rear of an attacker vehicle, the method generates temporally persistent adversarial perturbations that continuously steer a vision-based end-to-end autonomous driving system away from its intended route toward an attacker-specified destination, all while maintaining natural driving behavior. The approach integrates adversarial patch generation, online interaction-based adjustment, and robustness optimization to handle variations in viewpoint, illumination, weather, and background clutter. Implemented as the physical attack framework JackZebra, the approach achieves significantly higher success rates in real-world scenarios than existing methods.

📝 Abstract
Autonomous Vehicles (AVs), especially vision-based AVs, are rapidly being deployed without human operators. As AVs operate in safety-critical environments, understanding their robustness in an adversarial environment is an important research problem. Prior physical adversarial attacks on vision-based autonomous vehicles predominantly target immediate safety failures (e.g., a crash, a traffic-rule violation, or a transient lane departure) by inducing a short-lived perception or control error. This paper shows a qualitatively different risk: a long-horizon route integrity compromise, where an attacker gradually steers a victim AV away from its intended route and into an attacker-chosen destination while the victim continues to drive "normally." This poses a danger not only to the victim vehicle itself but also to potential passengers sitting inside it. In this paper, we design and implement the first adversarial framework, called JackZebra, that performs route-level hijacking of a vision-based end-to-end driving stack using a physically plausible attacker vehicle with a reconfigurable display mounted on the rear. The central challenge is temporal persistence: adversarial influence must remain effective across changing viewpoints, lighting, weather, traffic, and the victim's continual replanning -- without triggering conspicuous failures. Our key insight is to treat route hijacking as a closed-loop control problem and to convert adversarial patches into steering primitives that can be selected online via an interactive adjustment loop. Our adversarial patches are also carefully designed against worst-case background and sensor variations so that the adversarial impact on the victim persists. Our evaluation shows that JackZebra can successfully hijack victim vehicles to deviate from original routes and stop at adversarial destinations with a high success rate.
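The abstract's key insight, treating route hijacking as a closed-loop control problem in which adversarial patches act as steering primitives selected online, can be sketched in a few lines. The sketch below is purely illustrative: the patch names, the calibrated heading nudges, and the assumption that the victim responds linearly to the displayed patch are all hypothetical stand-ins, not the paper's actual primitives or victim model.

```python
import math

# Hypothetical library of steering primitives: each adversarial patch is
# assumed (for illustration) to nudge the victim's heading by a fixed,
# pre-calibrated amount in radians. Negative = steer left.
PATCH_LIBRARY = {
    "patch_left_strong": -0.10,
    "patch_left_soft": -0.03,
    "patch_neutral": 0.00,
    "patch_right_soft": 0.03,
    "patch_right_strong": 0.10,
}

def select_patch(victim_heading: float, target_heading: float) -> str:
    """Closed-loop selection step: pick the patch whose expected heading
    nudge best reduces the error toward the attacker-chosen route."""
    # Wrap the heading error into (-pi, pi] so the controller always
    # steers the short way around.
    error = math.atan2(math.sin(target_heading - victim_heading),
                       math.cos(target_heading - victim_heading))
    return min(PATCH_LIBRARY, key=lambda p: abs(error - PATCH_LIBRARY[p]))

def hijack_step(victim_heading: float, target_heading: float):
    """One iteration of the interactive adjustment loop: observe the
    victim, select a patch, and predict the (assumed) victim response."""
    patch = select_patch(victim_heading, target_heading)
    return patch, victim_heading + PATCH_LIBRARY[patch]
```

Repeating `hijack_step` each frame while updating the observed victim heading gives the temporally persistent, gradual deviation the abstract describes; the actual framework additionally optimizes the patches themselves for robustness to viewpoint, lighting, and background variation.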
Problem

Research questions and friction points this paper is trying to address.

autonomous vehicles
adversarial attacks
route hijacking
vision-based driving
long-horizon integrity
Innovation

Methods, ideas, or system contributions that make the work stand out.

route hijacking
adversarial patches
end-to-end autonomous driving
closed-loop control
physical adversarial attack
Qi Sun, Johns Hopkins University
Ahmed Abdo, Johns Hopkins University
Luis Burbano, University of California, Santa Cruz
Ziyang Li, Johns Hopkins University (Programming Languages, Machine Learning)
Yaxing Yao, Assistant Professor at Johns Hopkins (Privacy, IoT, HCI)
Alvaro Cardenas, University of California, Santa Cruz
Yinzhi Cao, Johns Hopkins University (Computer Security)