Robust Driving Control for Autonomous Vehicles: An Intelligent General-sum Constrained Adversarial Reinforcement Learning Approach

📅 2025-10-10
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing deep reinforcement learning (DRL)-based autonomous driving policies are vulnerable to strategic, multi-step adversarial attacks in real-world deployment, suffering from three key limitations: (i) short-sighted defense mechanisms that lack long-horizon adversarial robustness; (ii) adversaries that cannot reliably trigger genuinely safety-critical events (e.g., collisions), leading to biased evaluation; and (iii) the absence of robustness constraints, causing training instability and policy drift. To address these, the authors propose the Intelligent General-sum Constrained Adversarial Reinforcement Learning (IGCARL) framework, which integrates a strategic, goal-directed adversary, a general-sum objective for triggering safety-critical events, and a constraint-aware training paradigm. Experimental results demonstrate that IGCARL improves the success rate by at least 27.9% over state-of-the-art methods, significantly enhancing policy robustness and safety in complex traffic scenarios.

📝 Abstract
Deep reinforcement learning (DRL) has demonstrated remarkable success in developing autonomous driving policies. However, its vulnerability to adversarial attacks remains a critical barrier to real-world deployment. Although existing robust methods have achieved success, they still suffer from three key issues: (i) these methods are trained against myopic adversarial attacks, limiting their ability to respond to more strategic threats, (ii) they have trouble causing truly safety-critical events (e.g., collisions) and instead often result only in minor consequences, and (iii) these methods can introduce learning instability and policy drift during training due to the lack of robust constraints. To address these issues, we propose Intelligent General-sum Constrained Adversarial Reinforcement Learning (IGCARL), a novel robust autonomous driving approach that consists of a strategic targeted adversary and a robust driving agent. The strategic targeted adversary is designed to leverage the temporal decision-making capabilities of DRL to execute strategically coordinated multi-step attacks. In addition, it explicitly focuses on inducing safety-critical events by adopting a general-sum objective. The robust driving agent learns by interacting with the adversary to develop a robust autonomous driving policy against adversarial attacks. To ensure stable learning in adversarial environments and to mitigate policy drift caused by attacks, the agent is optimized under a constrained formulation. Extensive experiments show that IGCARL improves the success rate by at least 27.9% over state-of-the-art methods, demonstrating superior robustness to adversarial attacks and enhancing the safety and reliability of DRL-based autonomous driving.
Problem

Research questions and friction points this paper is trying to address.

Addressing strategic multi-step adversarial attacks on autonomous driving systems
Improving robustness against safety-critical events like collisions
Ensuring stable learning and preventing policy drift during training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Intelligent General-sum Constrained Adversarial Reinforcement Learning approach
Strategic targeted adversary with multi-step coordinated attacks
Constrained formulation ensures stable learning and policy robustness
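The interplay of these three ideas (a multi-step adversary with its own general-sum objective, and an agent trained under a drift constraint) can be illustrated with a minimal toy sketch. Everything below is an assumption for exposition, not the paper's implementation: the lane-keeping dynamics, the gains `k_agent`/`k_adv`, the reference gain `k_ref`, and the drift budget are all hypothetical stand-ins.

```python
import numpy as np

# Illustrative toy only: names, dynamics, and constants are assumptions.
# The agent steers a point toward the lane center (x = 0); the adversary
# perturbs the observed position within a bound at every step of the
# episode (a coordinated multi-step attack rather than a one-shot one).

def rollout(k_agent, k_adv, steps=10):
    """Return (final lane error, crashed) for the given policy gains."""
    x, crashed = 0.5, False
    for _ in range(steps):
        obs = x + np.clip(k_adv * x, -0.3, 0.3)  # bounded observation attack
        x = x - 0.4 * k_agent * obs              # agent's steering correction
        if abs(x) > 1.0:                         # safety-critical event
            crashed = True
            break
    return abs(x), crashed

def adv_score(k_agent, k_adv):
    # General-sum flavor: the adversary has its own objective (a large
    # bonus for crashes) instead of merely negating the agent's reward.
    err, crashed = rollout(k_agent, k_adv)
    return err + 10.0 * crashed

def train(iters=200, lr=0.1, k_ref=1.0, drift_budget=0.2):
    """Alternating updates with a Lagrangian policy-drift constraint."""
    k_agent, k_adv, lam, eps = 1.0, 0.0, 0.0, 1e-3
    for _ in range(iters):
        # Adversary ascends its own objective (finite-difference gradient).
        g_adv = (adv_score(k_agent, k_adv + eps)
                 - adv_score(k_agent, k_adv - eps)) / (2 * eps)
        k_adv = float(np.clip(k_adv + lr * g_adv, -1.0, 1.0))

        # Agent descends a Lagrangian: task loss + lam * drift violation,
        # a scalar stand-in for the paper's constrained optimization.
        def lagrangian(k):
            drift = max(abs(k - k_ref) - drift_budget, 0.0)
            return rollout(k, k_adv)[0] + lam * drift

        g_agent = (lagrangian(k_agent + eps)
                   - lagrangian(k_agent - eps)) / (2 * eps)
        k_agent -= lr * g_agent

        # Dual ascent: lam grows while the drift constraint is violated.
        lam = max(0.0, lam + 0.1 * (abs(k_agent - k_ref) - drift_budget))
    return k_agent, k_adv, lam
```

The dual variable `lam` only applies pressure when the agent's policy drifts beyond the budget, which is the mechanism by which a constrained formulation can keep adversarial training from destabilizing the policy.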
Junchao Fan
Beijing Key Laboratory of Security and Privacy in Intelligent Transportation, Beijing Jiaotong University, P.R. China
Xiaolin Chang
Beijing Jiaotong University
dependable and secure computing