🤖 AI Summary
Autonomous driving systems struggle with real-time perception and decision-making in dynamic, complex environments. Method: This paper proposes an active retrieval-augmented generation (RAG) framework tailored for vehicle-to-everything (V2X) cooperation. It integrates multi-sensor V2X data, chain-of-thought (CoT) prompting, and LLM-readable knowledge compilation to construct an online-updatable, multimodal environmental knowledge base. Contribution/Results: The framework overcomes key limitations of conventional large language models (LLMs) in low-latency response and cross-modal understanding, enabling context-aware autonomous decision-making. Experiments on a real-world V2X dataset demonstrate significant improvements in environmental perception accuracy and trajectory prediction performance, which translate into gains in safety, environmental adaptability, and real-time decision-making capability. This work establishes a new paradigm for LLM-driven autonomous driving systems.
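As a rough illustration of how these components could fit together (not the paper's actual implementation), the Python sketch below assumes a hypothetical `KnowledgeEntry`/`KnowledgeBase` schema, a naive keyword-overlap retriever standing in for whatever retriever the authors use, and a simple CoT prompt builder:

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeEntry:
    """One LLM-readable snippet compiled from a V2X sensor stream (hypothetical schema)."""
    timestamp: float  # seconds since scenario start
    source: str       # e.g. "roadside_lidar", "vehicle_camera"
    text: str         # natural-language rendering of the observation

@dataclass
class KnowledgeBase:
    """Online-updatable store; keyword-overlap retrieval is a stand-in for a real retriever."""
    entries: list = field(default_factory=list)

    def update(self, entry: KnowledgeEntry, horizon_s: float = 5.0) -> None:
        # Drop stale observations so the base stays small and retrieval stays fast.
        self.entries = [e for e in self.entries
                        if entry.timestamp - e.timestamp <= horizon_s]
        self.entries.append(entry)

    def retrieve(self, query: str, k: int = 3) -> list:
        # Score entries by the number of words they share with the query; keep top-k.
        terms = set(query.lower().split())
        ranked = sorted(self.entries,
                        key=lambda e: len(terms & set(e.text.lower().split())),
                        reverse=True)
        return [e for e in ranked[:k] if terms & set(e.text.lower().split())]

def build_cot_prompt(task: str, context: list) -> str:
    """Chain-of-thought prompt: ask the LLM to reason stepwise over retrieved context."""
    obs = "\n".join(f"[{e.source} @ {e.timestamp:.1f}s] {e.text}" for e in context)
    return ("You are the decision module of an autonomous vehicle.\n"
            f"Retrieved V2X observations:\n{obs}\n\n"
            f"Task: {task}\n"
            "Think step by step: identify the relevant agents, assess the risks, "
            "then state the chosen maneuver on the final line.")

# Usage: compile observations, update the base online, retrieve, and prompt.
kb = KnowledgeBase()
kb.update(KnowledgeEntry(12.0, "roadside_lidar", "pedestrian crossing 18 m ahead in the ego lane"))
kb.update(KnowledgeEntry(12.1, "vehicle_camera", "traffic light ahead is green"))
prompt = build_cot_prompt("Should the ego vehicle brake?",
                          kb.retrieve("pedestrian ahead in ego lane"))
```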
📝 Abstract
This study addresses the critical need for enhanced situational awareness in autonomous driving (AD) by leveraging the contextual reasoning capabilities of large language models (LLMs). Unlike traditional perception systems that rely on rigid, label-based annotations, the proposed approach integrates real-time, multimodal sensor data into a unified, LLM-readable knowledge base, enabling LLMs to dynamically understand and respond to complex driving environments. To overcome the inherent latency and modality limitations of LLMs, a proactive Retrieval-Augmented Generation (RAG) framework is designed for AD and combined with a chain-of-thought prompting mechanism to ensure rapid, context-rich understanding. Experimental results on real-world Vehicle-to-Everything (V2X) datasets demonstrate significant improvements in perception and prediction performance, highlighting the potential of this framework to enhance safety, adaptability, and decision-making in next-generation AD systems.
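The latency argument suggests retrieval should happen before a decision query arrives rather than on demand. One plausible reading, sketched below, prefetches context for a set of anticipated "standing" queries on every sensor tick; `ProactiveRetriever` and its method names are assumptions layered on the `KnowledgeBase` interface sketched above, not the paper's API.

```python
class ProactiveRetriever:
    """Hypothetical proactive-retrieval wrapper: refresh cached context for a set of
    anticipated ("standing") queries on every sensor tick, so that when a decision
    is actually needed the critical path only formats a prompt."""

    def __init__(self, kb, standing_queries):
        self.kb = kb                              # any object exposing update()/retrieve()
        self.standing_queries = standing_queries  # e.g. ["pedestrians ahead", "merging vehicles"]
        self.cache = {}                           # query -> prefetched context entries

    def on_sensor_tick(self, entry):
        # Fold the newest observation in, then prefetch context in the background.
        self.kb.update(entry)
        for q in self.standing_queries:
            self.cache[q] = self.kb.retrieve(q)

    def context_for(self, query):
        # Fast path: serve prefetched context; otherwise fall back to one live retrieval.
        return self.cache.get(query) or self.kb.retrieve(query)
```

The trade-off is a small, predictable amount of retrieval work per sensor tick in exchange for a shorter critical path at decision time, which is one way to reconcile LLM latency with the real-time constraint the abstract highlights.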