A Tool for Benchmarking Large Language Models' Robustness in Assessing the Realism of Driving Scenarios

📅 2025-11-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of robustness evaluation for large language models (LLMs) used to assess the realism of autonomous driving scenarios. To this end, the authors propose DriveRLR, the first systematic benchmark for evaluating LLMs' robustness in driving-scenario realism discrimination. Methodologically, DriveRLR builds on the DeepScenario dataset to enable scalable scenario perturbation and structured natural-language prompt construction, augmented by a multi-round consistency evaluation protocol. A key feature is the connection between robustness assessment and simulation-based testing: evaluation outcomes can serve as feedback signals, for instance as an objective function guiding the generation of realistic or deliberately adversarial scenarios. Experiments cover three state-of-the-art models (GPT-5, Llama 4 Maverick, and Mistral Small 3.2) and show that DriveRLR is sensitive to fine-grained differences in model capability and integrates readily into simulation-based testing pipelines, supporting safety-critical validation of LLM-assisted autonomous driving testing.
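The multi-round consistency protocol is described only at a high level above. Below is a minimal Python sketch of what such a protocol could look like, assuming a generic `query_llm` callable that returns a binary realistic/unrealistic verdict; both the callable's name and the verdict format are assumptions for illustration, not details from the paper.

```python
import random
from collections import Counter
from typing import Callable, List

def consistency_eval(
    query_llm: Callable[[str], str],  # assumed interface: prompt -> "realistic" | "unrealistic"
    prompt: str,
    rounds: int = 5,
) -> dict:
    """Query the model repeatedly on one prompt and measure answer stability.

    A minimal sketch of a multi-round consistency check; the paper's actual
    protocol and aggregation may differ.
    """
    verdicts: List[str] = [query_llm(prompt) for _ in range(rounds)]
    counts = Counter(verdicts)
    majority_verdict, majority_count = counts.most_common(1)[0]
    return {
        "majority_verdict": majority_verdict,
        "consistency": majority_count / rounds,  # 1.0 means perfectly stable answers
        "verdicts": verdicts,
    }

if __name__ == "__main__":
    # Stub model that flips its answer ~20% of the time, for demonstration only.
    stub = lambda _prompt: "realistic" if random.random() < 0.8 else "unrealistic"
    print(consistency_eval(stub, "Is the following driving scenario realistic? ..."))
```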

📝 Abstract
In recent years, autonomous driving systems (ADS) have made significant progress, yet ensuring their safety remains a key challenge. To this end, scenario-based testing offers a practical solution, and simulation-based methods have gained traction due to the high cost and risk of real-world testing. However, evaluating the realism of simulated scenarios remains difficult, creating demand for effective assessment methods. Recent advances show that Large Language Models (LLMs) possess strong reasoning and generalization capabilities, suggesting their potential for assessing scenario realism through scenario-related textual prompts. Motivated by this, we propose DriveRLR, a benchmark tool to assess the robustness of LLMs in evaluating the realism of driving scenarios. DriveRLR generates mutated scenario variants and constructs prompts, which are then used to assess a given LLM's ability and robustness in determining the realism of driving scenarios. We validate DriveRLR on the DeepScenario dataset using three state-of-the-art LLMs: GPT-5, Llama 4 Maverick, and Mistral Small 3.2. Results show that DriveRLR effectively reveals differences in the robustness of various LLMs, demonstrating its effectiveness and practical value in scenario realism assessment. Beyond LLM robustness evaluation, DriveRLR can serve as a practical component in broader applications, for example as an objective function to guide scenario generation in simulation-based ADS testing workflows.
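The abstract's closing point, using DriveRLR as an objective function for scenario generation, is stated only at a high level. One plausible realization is a simple search loop; the sketch below assumes a hypothetical `realism_score` in [0, 1] derived from LLM verdicts and a hypothetical `mutate` operator, neither of which is specified by the paper.

```python
import copy
import random
from typing import Callable, Dict

Scenario = Dict[str, float]  # e.g. {"ego_speed": 12.0, "lead_gap": 8.5}

def search_low_realism(
    seed: Scenario,
    realism_score: Callable[[Scenario], float],  # hypothetical: LLM-derived score in [0, 1]
    mutate: Callable[[Scenario], Scenario],      # hypothetical mutation operator
    iterations: int = 50,
) -> Scenario:
    """Hill-climb toward scenario variants the assessor rates least realistic.

    Only a sketch: the paper states DriveRLR can act as an objective function,
    not that it uses this particular search loop.
    """
    best, best_score = seed, realism_score(seed)
    for _ in range(iterations):
        candidate = mutate(copy.deepcopy(best))
        score = realism_score(candidate)
        if score < best_score:  # lower score = less realistic = more adversarial
            best, best_score = candidate, score
    return best

if __name__ == "__main__":
    rng = random.Random(0)
    # Stub objective and mutation, for demonstration only.
    score = lambda s: max(0.0, 1.0 - abs(s["ego_speed"] - 30.0) / 30.0)
    jitter = lambda s: {k: v + rng.uniform(-1.0, 1.0) for k, v in s.items()}
    print(search_low_realism({"ego_speed": 28.0}, score, jitter))
```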
Problem

Research questions and friction points this paper is trying to address.

Evaluating realism of simulated driving scenarios for autonomous systems
Assessing robustness of Large Language Models in scenario realism evaluation
Developing benchmark tool to test LLMs on mutated driving scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates mutated scenario variants for testing (a minimal sketch follows this list)
Constructs prompts to assess LLM realism evaluation
Benchmarks LLM robustness in driving scenario assessment
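
In its simplest form, the scenario mutation referenced above might jitter numeric scenario parameters. The operator below is hypothetical; the actual perturbations DriveRLR applies are not described on this page.

```python
import random
from typing import Dict, Optional

Scenario = Dict[str, float]

def perturb_scenario(
    scenario: Scenario,
    rel_noise: float = 0.1,
    rng: Optional[random.Random] = None,
) -> Scenario:
    """Return a variant with every numeric parameter jittered by up to ±rel_noise.

    A hypothetical mutation operator; DriveRLR's real mutation strategy
    may target specific scenario elements instead.
    """
    rng = rng or random.Random()
    return {
        name: value * (1.0 + rng.uniform(-rel_noise, rel_noise))
        for name, value in scenario.items()
    }

if __name__ == "__main__":
    seed = {"ego_speed_mps": 12.0, "lead_gap_m": 8.5, "lane_width_m": 3.5}
    print(perturb_scenario(seed, rel_noise=0.05, rng=random.Random(0)))
```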