VLM-UDMC: VLM-Enhanced Unified Decision-Making and Motion Control for Urban Autonomous Driving

📅 2025-07-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address insufficient human-like scene understanding, risk perception, and decision interpretability in urban autonomous driving, this paper proposes a unified decision-making and motion control framework that integrates vision-language models (VLMs) with retrieval-augmented generation (RAG). Methodologically, it introduces a two-stage reasoning mechanism: (i) high-level, context-aware semantic understanding and risk identification via VLM-RAG; and (ii) low-level motion generation that couples a lightweight multi-kernel decomposed LSTM, which captures short-term motion trends of traffic participants, with context-aware potential functions in the optimal motion planner to produce interpretable driving behaviors. This hierarchical architecture decouples semantic reasoning from motion control, enhancing both transparency and dynamic adaptability. Extensive simulation and real-world vehicle experiments demonstrate superior decision rationality and safety in complex urban scenarios. The implementation is publicly available.
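For intuition, the sketch below shows one plausible form of the low-level predictor described above: a multi-kernel moving-average decomposition smooths an agent's observed trajectory at several scales, and an LSTM encodes the resulting trend for short-horizon prediction. The kernel sizes, hidden size, and prediction horizon are illustrative assumptions (PyTorch), not the authors' implementation.

```python
# Minimal sketch (not the authors' code): multi-kernel trend decomposition + LSTM
# for short-horizon trajectory prediction. Hyperparameters are assumptions.
import torch
import torch.nn as nn


class MultiKernelTrendLSTM(nn.Module):
    def __init__(self, kernel_sizes=(3, 7, 15), hidden_size=64, horizon=12):
        super().__init__()
        # One average-pooling branch per kernel size extracts a trend of the
        # observed (x, y) history at a different smoothing scale.
        self.pools = nn.ModuleList(
            [nn.AvgPool1d(k, stride=1, padding=k // 2, count_include_pad=False)
             for k in kernel_sizes]
        )
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, horizon * 2)
        self.horizon = horizon

    def forward(self, history):  # history: (batch, T_obs, 2) past positions
        x = history.transpose(1, 2)                     # (batch, 2, T_obs)
        trends = [pool(x) for pool in self.pools]       # per-scale smoothed trends
        trend = torch.stack(trends, dim=0).mean(dim=0)  # fuse the scales
        _, (h_n, _) = self.lstm(trend.transpose(1, 2))  # encode the trend sequence
        out = self.head(h_n[-1])                        # (batch, horizon * 2)
        return out.view(-1, self.horizon, 2)            # predicted future (x, y)


if __name__ == "__main__":
    model = MultiKernelTrendLSTM()
    past = torch.randn(4, 20, 2)   # 4 agents, 20 observed steps
    print(model(past).shape)       # torch.Size([4, 12, 2])
```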

📝 Abstract
Scene understanding and risk-aware attention are crucial for human drivers to make safe and effective driving decisions. To imitate this cognitive ability in urban autonomous driving while ensuring transparency and interpretability, we propose a vision-language model (VLM)-enhanced unified decision-making and motion control framework, named VLM-UDMC. This framework incorporates scene reasoning and risk-aware insights into an upper-level slow system, which dynamically reconfigures the optimal motion planning for the downstream fast system. The reconfiguration is based on real-time environmental changes, which are encoded through context-aware potential functions. More specifically, the upper-level slow system employs a two-step reasoning policy with Retrieval-Augmented Generation (RAG), leveraging foundation models to process multimodal inputs and retrieve contextual knowledge, thereby generating risk-aware insights. Meanwhile, a lightweight multi-kernel decomposed LSTM provides real-time short-horizon trajectory predictions for heterogeneous traffic participants by extracting smoother trend representations. The effectiveness of the proposed VLM-UDMC framework is verified via both simulations and real-world experiments with a full-size autonomous vehicle. It is demonstrated that the presented VLM-UDMC effectively leverages scene understanding and attention decomposition for rational driving decisions, thus improving the overall urban driving performance. Our open-source project is available at https://github.com/henryhcliu/vlmudmc.git.
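As a rough illustration of how risk-aware attention could reshape the fast system's objective, the snippet below adds a per-participant repulsive potential to a planning stage cost, with each weight supplied by the upper-level reasoning. The Gaussian form and the example weights are assumptions for illustration only, not the paper's context-aware potential functions.

```python
# Illustrative sketch only: a reweighted repulsive potential added to the
# motion-planning stage cost. Form and parameters are assumptions.
import math


def repulsive_potential(ego_xy, obstacle_xy, attention_weight, sigma=2.0):
    """Cost that grows as the ego nears a predicted obstacle position.

    attention_weight: risk-aware weight issued by the upper-level slow system
    (higher for participants flagged as risky).
    """
    dx = ego_xy[0] - obstacle_xy[0]
    dy = ego_xy[1] - obstacle_xy[1]
    dist_sq = dx * dx + dy * dy
    return attention_weight * math.exp(-dist_sq / (2.0 * sigma ** 2))


def planning_stage_cost(ego_xy, predicted_obstacles, attention_weights):
    """Sum of per-obstacle potentials added to the planner's objective."""
    return sum(
        repulsive_potential(ego_xy, obs, w)
        for obs, w in zip(predicted_obstacles, attention_weights)
    )


# Example: two predicted participants, the nearer one weighted higher upstream.
cost = planning_stage_cost((0.0, 0.0), [(3.0, 1.0), (10.0, -2.0)], [1.5, 0.5])
print(cost)
```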
Problem

Research questions and friction points this paper is trying to address.

Enhancing urban autonomous driving with VLM-based decision-making
Improving scene understanding and risk-aware attention for safety
Integrating real-time environmental changes into motion planning
Innovation

Methods, ideas, or system contributions that make the work stand out.

VLM-enhanced unified decision-making and motion control
Retrieval-Augmented Generation (RAG) for risk-aware insights (see the toy retrieval sketch after this list)
Lightweight multi-kernel LSTM for trajectory prediction
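To make the RAG step above concrete, here is a toy retrieval-and-prompting sketch: scene text is matched against a small knowledge base by word overlap, and the top snippets are folded into the VLM prompt. The knowledge snippets, scoring rule, and prompt template are assumptions, not the paper's pipeline.

```python
# Toy sketch of the retrieval step in a RAG-style reasoning policy; all
# snippets and the scoring rule are illustrative assumptions.

DRIVING_KNOWLEDGE = [
    "Yield to pedestrians at crosswalks and reduce speed near occlusions.",
    "Keep a larger headway when following cyclists or motorcycles.",
    "At unsignalized intersections, yield to vehicles already inside.",
]


def retrieve(scene_description, knowledge, top_k=2):
    """Rank knowledge snippets by word overlap with the scene description."""
    scene_words = set(scene_description.lower().split())
    scored = sorted(
        knowledge,
        key=lambda snippet: len(scene_words & set(snippet.lower().split())),
        reverse=True,
    )
    return scored[:top_k]


def build_prompt(scene_description, retrieved):
    """Compose the VLM prompt from the scene text and retrieved context."""
    context = "\n".join(f"- {r}" for r in retrieved)
    return (
        f"Scene: {scene_description}\n"
        f"Relevant driving knowledge:\n{context}\n"
        "Identify the main risks and suggest attention weights per participant."
    )


scene = "A pedestrian waits at the crosswalk while a cyclist merges ahead."
print(build_prompt(scene, retrieve(scene, DRIVING_KNOWLEDGE)))
```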
Haichao Liu
Robotics and Autonomous Systems Thrust, The Hong Kong University of Science and Technology (Guangzhou), Guangzhou 511453, China, and also with the Department of Electrical and Computer Engineering, National University of Singapore, Singapore
Haoren Guo
PhD Candidate, National University of Singapore
deep learning, time series, PDM
Pei Liu
Robotics and Autonomous Systems Thrust, The Hong Kong University of Science and Technology (Guangzhou), Guangzhou 511453, China
Benshan Ma
Robotics and Autonomous Systems Thrust, The Hong Kong University of Science and Technology (Guangzhou), Guangzhou 511453, China
Yuxiang Zhang
Department of Electrical and Computer Engineering, National University of Singapore, Singapore
Jun Ma
Robotics and Autonomous Systems Thrust, The Hong Kong University of Science and Technology (Guangzhou), Guangzhou 511453, China, and also with the Division of Emerging Interdisciplinary Areas, The Hong Kong University of Science and Technology, Hong Kong SAR, China
Tong Heng Lee
Department of Electrical and Computer Engineering, National University of Singapore, Singapore