🤖 AI Summary
To address the limited explainability, poor editability, and weak generalization of autonomous driving decision systems, this paper proposes the first method that deeply integrates large language models (LLMs) into a rule-based decision framework. The approach comprises three synergistic modules: information-aware perception, LLM-driven generation of executable rules (including Python code), and rule-engine execution coupled with multi-round human-in-the-loop validation. Together these enable structured scene representation and traceable, auditable decisions. Key innovations include treating the LLM as a "rule compiler" that bridges high-level semantic understanding with formal, executable logic; supporting human intervention and iterative rule refinement; and eliminating reliance on large-scale labeled data or end-to-end training. Experiments demonstrate significant improvements over reinforcement learning and existing LLM-driven methods in decision accuracy, response latency, and explanation quality, confirming practical feasibility for real-vehicle deployment.
📝 Abstract
Constructing an interpretable autonomous driving decision-making system has become a focal point of academic research. In this study, we propose a novel approach that leverages large language models (LLMs) to generate executable, rule-based decision systems to address this challenge. Specifically, harnessing the strong reasoning and programming capabilities of LLMs, we introduce the ADRD (LLM-Driven Autonomous Driving Based on Rule-based Decision Systems) framework, which integrates three core modules: the Information Module, the Agents Module, and the Testing Module. The framework first aggregates contextual driving-scenario information through the Information Module, then uses the Agents Module to generate rule-based driving tactics, which are iteratively refined through continuous interaction with the Testing Module. Extensive experimental evaluations demonstrate that ADRD achieves superior performance on autonomous driving decision tasks. Compared with traditional reinforcement learning approaches and the most advanced LLM-based methods, ADRD shows significant advantages in interpretability, response speed, and driving performance. These results highlight the framework's ability to achieve a comprehensive and accurate understanding of complex driving scenarios, and underscore the promise of transparent, rule-based decision systems that are easily modifiable and broadly applicable. To the best of our knowledge, this is the first work to integrate large language models with rule-based systems for autonomous driving decision-making, and our findings validate its potential for real-world deployment.
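To make the idea concrete, below is a minimal sketch of the kind of executable, rule-based driving tactic the Agents Module might emit. The scene fields (`lead_gap_m`, `ego_speed_mps`, `left_lane_free`) and the discrete action names are illustrative assumptions, not the paper's actual interface; the point is that each decision traces back to a named, human-auditable rule.

```python
def decide(scene: dict) -> str:
    """Hypothetical LLM-generated driving tactic for a highway scene.

    Assumed scene keys (not from the paper):
      lead_gap_m     -- distance to the lead vehicle in metres
      ego_speed_mps  -- ego vehicle speed in metres per second
      left_lane_free -- whether the adjacent left lane is clear
    """
    # Rule 1: if the time gap to the lead vehicle is under ~2 seconds,
    # the current lane is unsafe to continue at speed.
    if scene["lead_gap_m"] < 2.0 * scene["ego_speed_mps"]:
        # Rule 2: prefer overtaking on the left when that lane is clear.
        if scene["left_lane_free"]:
            return "LANE_LEFT"
        # Rule 3: otherwise fall back to slowing down.
        return "SLOWER"
    # Rule 4: with a comfortable gap, keep the current lane and speed.
    return "IDLE"


if __name__ == "__main__":
    # Ego at 10 m/s with a 10 m gap (1 s time gap) and a free left lane.
    print(decide({"lead_gap_m": 10.0, "ego_speed_mps": 10.0,
                  "left_lane_free": True}))
```

Because the tactic is plain code rather than network weights, a human reviewer can inspect, edit, or veto any individual rule, which is exactly the editability and traceability the framework targets.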