🤖 AI Summary
To address insufficient human-like scene understanding, risk perception, and decision interpretability in urban autonomous driving, this paper proposes a unified decision-making and motion control framework integrating vision-language models (VLMs) with retrieval-augmented generation (RAG). Methodologically, it introduces a two-stage reasoning mechanism: (i) high-level context-aware semantic understanding and risk identification via VLM-RAG; and (ii) low-level trajectory modeling using a lightweight multi-kernel decomposed LSTM coupled with a context-aware potential function to capture short-term motion trends and generate interpretable driving behaviors. This hierarchical architecture decouples semantic reasoning from motion control, enhancing both transparency and dynamic adaptability. Extensive evaluations in simulation and real-world vehicle experiments demonstrate superior decision rationality and safety in complex urban scenarios. The implementation is publicly available.
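The slow/fast decoupling described above can be sketched in a few lines. This is a minimal illustrative sketch only: the keyword-matching `slow_system`, the `RiskAdvisory` container, and the potential-descent `fast_system` step are stand-in assumptions, not the paper's actual VLM-RAG pipeline or motion controller.

```python
from dataclasses import dataclass

# Hypothetical container for the slow stage's risk-aware insights.
@dataclass
class RiskAdvisory:
    risk_objects: list        # ids of traffic participants flagged as risky
    attention_weights: dict   # per-object weight used to scale potential terms

def slow_system(scene_description: str, retrieved_context: list) -> RiskAdvisory:
    """Stand-in for VLM+RAG reasoning: flag objects whose retrieved cue
    appears in the scene description, and assign them higher attention."""
    risky = [c["id"] for c in retrieved_context if c["cue"] in scene_description]
    return RiskAdvisory(
        risk_objects=risky,
        attention_weights={oid: 1.0 for oid in risky},
    )

def fast_system(ego_pos, obstacles, advisory: RiskAdvisory, step=0.5):
    """Stand-in for the fast layer: one gradient-descent step on a sum of
    repulsive potentials, each scaled by the advisory's attention weight."""
    gx, gy = 0.0, 0.0
    for oid, (ox, oy) in obstacles.items():
        w = advisory.attention_weights.get(oid, 0.2)  # unflagged: low weight
        dx, dy = ego_pos[0] - ox, ego_pos[1] - oy
        d2 = dx * dx + dy * dy + 1e-6
        gx += w * dx / d2                             # push away from obstacle
        gy += w * dy / d2
    return (ego_pos[0] + step * gx, ego_pos[1] + step * gy)
```

In this toy setting, a pedestrian flagged by the slow stage repels the ego vehicle more strongly than an unflagged vehicle at the same distance, which mirrors how the upper layer reconfigures the lower-level planner's cost landscape.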
📝 Abstract
Scene understanding and risk-aware attention are crucial for human drivers to make safe and effective driving decisions. To imitate this cognitive ability in urban autonomous driving while ensuring transparency and interpretability, we propose a vision-language model (VLM)-enhanced unified decision-making and motion control framework, named VLM-UDMC. This framework incorporates scene reasoning and risk-aware insights into an upper-level slow system, which dynamically reconfigures the optimal motion planning for the downstream fast system. The reconfiguration responds to real-time environmental changes, which are encoded through context-aware potential functions. More specifically, the upper-level slow system employs a two-step reasoning policy with Retrieval-Augmented Generation (RAG), leveraging foundation models to process multimodal inputs and retrieve contextual knowledge, thereby generating risk-aware insights. Meanwhile, a lightweight multi-kernel decomposed LSTM provides real-time trajectory predictions for heterogeneous traffic participants by extracting smoother trend representations over short horizons. The effectiveness of the proposed VLM-UDMC framework is verified through both simulations and real-world experiments with a full-size autonomous vehicle. The results demonstrate that VLM-UDMC effectively leverages scene understanding and attention decomposition to make rational driving decisions, thereby improving overall urban driving performance. Our open-source project is available at https://github.com/henryhcliu/vlmudmc.git.
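One way to picture the trend-extraction idea behind the multi-kernel decomposed LSTM is the following sketch: a trajectory coordinate sequence is smoothed with several moving-average kernels, the smoothings are averaged into a stable trend, and the trend is extrapolated. The kernel sizes and the linear-extrapolation head `predict_next` are illustrative assumptions standing in for the paper's learned LSTM predictor.

```python
def moving_average(seq, k):
    """Moving average with edge padding; output has the same length as input."""
    pad = [seq[0]] * (k // 2) + list(seq) + [seq[-1]] * (k - 1 - k // 2)
    return [sum(pad[i:i + k]) / k for i in range(len(seq))]

def multi_kernel_trend(seq, kernels=(3, 5, 9)):
    """Average several moving-average smoothings (assumed kernel sizes)
    to obtain a smoother trend representation of the trajectory."""
    trends = [moving_average(seq, k) for k in kernels]
    return [sum(vals) / len(kernels) for vals in zip(*trends)]

def predict_next(seq, horizon=3):
    """Linearly extrapolate the extracted trend over a short horizon
    (a stand-in for the lightweight LSTM prediction head)."""
    trend = multi_kernel_trend(seq)
    slope = trend[-1] - trend[-2]
    return [trend[-1] + slope * (h + 1) for h in range(horizon)]
```

Applied per coordinate of an observed trajectory, this yields short-horizon predictions that follow the participant's motion trend while suppressing high-frequency noise, which is the role the decomposition plays in the full framework.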