Case-based Reasoning Augmented Large Language Model Framework for Decision Making in Realistic Safety-Critical Driving Scenarios

📅 2025-06-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenges of weak domain adaptation, poor situational grounding, and missing experiential knowledge that arise when applying large language models (LLMs) to safety-critical driving scenarios, this paper proposes a case-based reasoning augmented LLM (CBR-LLM) decision-making framework. The method integrates case-based reasoning (CBR) with LLMs: it first performs semantic parsing of dashcam driving videos for situational understanding, then uses risk-aware prompting together with similarity-driven retrieval of historical driving cases to generate context-sensitive, interpretable, and human-aligned evasive maneuver decisions. The core innovation is the explicit injection of structured driving experience into the LLM's reasoning process, which improves decision reliability and alignment with human expert behavior. Experiments across multiple open-source LLMs show significant improvements in decision accuracy, plausibility, and robustness, and case studies further validate the framework in complex real-world driving scenarios.
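The pipeline sketched above (semantic scene understanding, case retrieval, risk-aware prompt assembly, LLM decision) can be pictured with a minimal Python sketch. Every name here (the `DrivingCase` schema, `describe_scene`, `case_base.retrieve`, `llm_complete`) is a hypothetical placeholder chosen for illustration, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class DrivingCase:
    scene_description: str   # semantic summary of a past dashcam clip
    risk_type: str           # e.g. "cut-in", "pedestrian crossing"
    maneuver: str            # evasive maneuver the human expert chose

def build_prompt(scene: str, risk_type: str, cases: list) -> str:
    """Assemble a risk-aware prompt with retrieved cases as in-context examples."""
    examples = "\n\n".join(
        f"Past case ({c.risk_type}): {c.scene_description}\nChosen maneuver: {c.maneuver}"
        for c in cases
    )
    return (
        f"You are assisting a vehicle in a {risk_type} risk scenario.\n\n"
        f"{examples}\n\n"
        f"Current scene: {scene}\n"
        "Recommend one evasive maneuver and briefly justify it."
    )

# Hypothetical usage: parse the video, retrieve similar cases, query the LLM.
# scene = describe_scene("dashcam.mp4")      # semantic scene understanding
# cases = case_base.retrieve(scene, k=3)     # similarity-driven case retrieval
# print(llm_complete(build_prompt(scene, "cut-in", cases)))
```

The key idea this illustrates is the flow: retrieved historical cases become in-context examples, while the detected risk type conditions the instruction given to the LLM.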

📝 Abstract
Driving in safety-critical scenarios requires quick, context-aware decision-making grounded in both situational understanding and experiential reasoning. Large Language Models (LLMs), with their powerful general-purpose reasoning capabilities, offer a promising foundation for such decision-making. However, their direct application to autonomous driving remains limited due to challenges in domain adaptation, contextual grounding, and the lack of experiential knowledge needed to make reliable and interpretable decisions in dynamic, high-risk environments. To address this gap, this paper presents a Case-Based Reasoning Augmented Large Language Model (CBR-LLM) framework for evasive maneuver decision-making in complex risk scenarios. Our approach integrates semantic scene understanding from dashcam video inputs with the retrieval of relevant past driving cases, enabling LLMs to generate maneuver recommendations that are both context-sensitive and human-aligned. Experiments across multiple open-source LLMs show that our framework improves decision accuracy, justification quality, and alignment with human expert behavior. Risk-aware prompting strategies further enhance performance across diverse risk types, while similarity-based case retrieval consistently outperforms random sampling in guiding in-context learning. Case studies further demonstrate the framework's robustness in challenging real-world conditions, underscoring its potential as an adaptive and trustworthy decision-support tool for intelligent driving systems.
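One concrete detail worth unpacking: the abstract notes that similarity-based case retrieval consistently outperforms random sampling when selecting in-context examples. Below is a minimal sketch of both strategies, assuming each historical case and the current scene are already embedded as vectors; the embedding model, vector dimensions, and function names are assumptions for illustration, not details from the paper.

```python
import random
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def retrieve_similar(query_vec: np.ndarray, case_vecs: list, k: int = 3) -> list:
    """Indices of the k stored cases most similar to the current scene."""
    scores = [cosine_similarity(query_vec, v) for v in case_vecs]
    return sorted(range(len(case_vecs)), key=lambda i: scores[i], reverse=True)[:k]

def retrieve_random(case_vecs: list, k: int = 3) -> list:
    """Random-sampling baseline, reported in the paper as weaker guidance."""
    return random.sample(range(len(case_vecs)), k)
```

The reported finding amounts to this: cases picked by something like `retrieve_similar` guide in-context learning toward better maneuver recommendations than cases picked by `retrieve_random`.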
Problem

Research questions and friction points this paper is trying to address.

Enhancing decision-making in safety-critical driving scenarios
Addressing LLM limitations in domain adaptation and contextual grounding
Integrating experiential knowledge for reliable, interpretable driving decisions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates case-based reasoning (CBR) directly with LLM decision-making
Improves decision accuracy by retrieving similar past driving cases as in-context examples
Uses risk-aware prompting to boost performance across risk types (see the sketch after this list)
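A minimal sketch of what risk-aware prompting could look like, assuming a hand-written mapping from detected risk type to a tailored directive; the paper's actual prompt wording and risk taxonomy are not reproduced here.

```python
# Hypothetical mapping from detected risk type to a tailored directive;
# the paper's actual prompt wording and risk categories may differ.
RISK_INSTRUCTIONS = {
    "cut-in": "Prioritize smooth deceleration and check adjacent lanes before steering.",
    "pedestrian": "Treat vulnerable road users as highest priority; prefer braking over swerving.",
    "obstacle": "Compare lane change against emergency braking and pick the lower-risk option.",
}

def risk_aware_instruction(risk_type: str) -> str:
    """Prepend the risk-specific directive to a generic decision instruction."""
    generic = "Recommend one evasive maneuver and explain the reasoning step by step."
    return f"{RISK_INSTRUCTIONS.get(risk_type, '')} {generic}".strip()
```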