Symbolic Imitation Learning: From Black-Box to Explainable Driving Policies

📅 2023-09-27
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
To address the poor interpretability and weak cross-scenario generalization of deep imitation learning policies in autonomous driving, this paper proposes Symbolic Imitation Learning (SIL), the first framework to integrate Inductive Logic Programming (ILP) into imitation learning. SIL automatically induces formal, human-readable, and verifiable driving rules from real-world trajectory data (highD). By combining symbolic rule induction with logic-constrained optimization, SIL transforms end-to-end black-box policies into transparent, logic-based decision models. Experimental results demonstrate that the induced rules achieve natural-language-level interpretability and significantly outperform state-of-the-art neural imitation learning methods on the highD dataset. Moreover, SIL improves generalization accuracy by over 23% on unseen traffic scenarios, effectively balancing safety-critical transparency with strong generalization capability.
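To make the idea of "human-readable, verifiable driving rules" concrete, here is a minimal sketch of the *kind* of logic rule ILP could induce from trajectory features. All predicate names, thresholds, and feature keys below are invented for illustration and are not taken from the paper; the docstring shows the same rule in Prolog-style notation, the form an ILP system would actually emit.

```python
# Hypothetical illustration of an ILP-induced driving rule rendered as a
# Python predicate. Names and thresholds are invented for this sketch,
# not taken from the SIL paper or the highD dataset schema.

def safe_lane_change_left(ego, left_lane):
    """Prolog-style form of the same (hypothetical) rule:

    lane_change_left(Ego) :-
        gap_ahead_small(Ego),
        left_lane_free(Ego),
        not_slower_than_left_follower(Ego).
    """
    gap_ahead_small = ego["gap_front_m"] < 30.0
    left_lane_free = (left_lane["gap_front_m"] > 50.0
                      and left_lane["gap_rear_m"] > 20.0)
    not_slower = ego["speed_mps"] >= left_lane["rear_vehicle_speed_mps"]
    return gap_ahead_small and left_lane_free and not_slower

# Toy scenario: ego is blocked ahead, the left lane has room,
# and the follower in the left lane is slightly slower.
ego = {"gap_front_m": 25.0, "speed_mps": 30.0}
left = {"gap_front_m": 80.0, "gap_rear_m": 35.0,
        "rear_vehicle_speed_mps": 28.0}
print(safe_lane_change_left(ego, left))  # True under these toy values
```

Because each clause is a named, checkable condition, such a rule can be inspected, unit-tested, or formally verified, which is the transparency property the paper contrasts with end-to-end neural policies.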
📝 Abstract
Current methods of imitation learning (IL), primarily based on deep neural networks, offer efficient means for obtaining driving policies from real-world data but suffer from significant limitations in interpretability and generalizability. These shortcomings are particularly concerning in safety-critical applications like autonomous driving. In this paper, we address these limitations by introducing Symbolic Imitation Learning (SIL), a groundbreaking method that employs Inductive Logic Programming (ILP) to learn driving policies which are transparent, explainable and generalisable from available datasets. Utilizing the real-world highD dataset, we subject our method to a rigorous comparative analysis against prevailing neural-network-based IL methods. Our results demonstrate that SIL not only enhances the interpretability of driving policies but also significantly improves their applicability across varied driving situations. Hence, this work offers a novel pathway to more reliable and safer autonomous driving systems, underscoring the potential of integrating ILP into the domain of IL.
Problem

Research questions and friction points this paper is trying to address.

Developing explainable driving policies instead of black-box models
Addressing interpretability and generalizability limitations in imitation learning
Creating transparent autonomous driving systems using symbolic learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Inductive Logic Programming for explainable policies
Derives transparent driving rules from real-world trajectory data (highD)
Maintains performance while enhancing policy interpretability
Iman Sharifi
Connected and Autonomous Vehicles Lab (www.cav-lab.io) at the Department of Mechanical Engineering Sciences, University of Surrey, Guildford, UK
Saber Fallah
Professor at University of Surrey
Deep Reinforcement Learning · Symbolic AI · Control · Autonomous Vehicles · Robotics