Dynamic Deception: When Pedestrians Team Up to Fool Autonomous Cars

📅 2026-02-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing adversarial attacks on autonomous-driving perception models often fail to induce system-level failures because the perturbation lacks spatiotemporal persistence. This work proposes the first multi-agent dynamic collaborative adversarial attack, in which multiple pedestrians jointly carry adversarial patches and coordinate their movements to prolong and amplify the perturbation signal, triggering anomalous vehicle behaviour at the system level. Evaluated on the CARLA platform with a realistic driving stack, a dynamic collusion attack by two pedestrians forces the vehicle to a complete stop in up to 50% of test runs, whereas single-pedestrian and static-placement attacks consistently fail. These findings reveal a critical gap between the robustness of perception models and the safety of end-to-end autonomous driving systems.

📝 Abstract
Many adversarial attacks on autonomous-driving perception models fail to cause system-level failures once deployed in a full driving stack. The main reason is that, once deployed in a system (e.g., within a simulator), attacks tend to be spatially or temporally short-lived due to the vehicle's dynamics, and hence rarely influence the vehicle's behaviour. In this paper, we address both limitations by introducing a system-level attack in which multiple dynamic elements (e.g., two pedestrians) carry adversarial patches (e.g., on clothes) and jointly amplify their effect through coordination and motion. We evaluate our attacks in the CARLA simulator using a state-of-the-art autonomous driving agent. At the system level, single-pedestrian attacks fail in all 10 runs, while dynamic collusion by two pedestrians induces full vehicle stops in up to 50% of runs, and static collusion yields no successful attack at all. These results show that system-level failures arise only when adversarial signals persist over time and are amplified through coordinated actors, exposing a gap between model-level robustness and end-to-end safety.
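The abstract's central claim, that a perturbation must persist across consecutive frames before it propagates into vehicle behaviour, can be illustrated with a toy simulation. This is a minimal sketch, not the paper's actual pipeline: the `vehicle_stops` function, the frame sequences, and the persistence threshold below are all hypothetical assumptions chosen for illustration.

```python
# Toy model: a controller reacts only when an "obstacle suppressed" signal
# persists for several consecutive frames, illustrating why short-lived
# adversarial perturbations rarely cause system-level failure.
# All names and thresholds are hypothetical, not taken from the paper.

def vehicle_stops(suppressed_frames, persistence_threshold=5):
    """Return True if perception is suppressed for at least
    `persistence_threshold` consecutive frames (a system-level failure)."""
    streak = 0
    for suppressed in suppressed_frames:
        streak = streak + 1 if suppressed else 0
        if streak >= persistence_threshold:
            return True
    return False

# Single moving pedestrian: the patch is in view only briefly as the
# vehicle passes, so the suppression signal is short-lived.
single = [False] * 3 + [True] * 3 + [False] * 4

# Two coordinated pedestrians: their motion keeps a patch in view across
# overlapping windows, extending the suppression streak.
collusion = [False] * 2 + [True] * 7 + [False]

print(vehicle_stops(single))     # False: perturbation too brief
print(vehicle_stops(collusion))  # True: persistent signal triggers a stop
```

The sketch makes the paper's argument concrete: neither pedestrian alone sustains suppression past the threshold, but their coordinated, overlapping exposure windows do.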
Problem

Research questions and friction points this paper is trying to address.

adversarial attacks
autonomous driving
system-level failure
dynamic deception
coordinated actors
Innovation

Methods, ideas, or system contributions that make the work stand out.

dynamic deception
adversarial patches
system-level attack
coordinated pedestrians
autonomous driving safety