AI Summary
This work addresses the safety challenges posed by semantic uncertainties that natural language instructions introduce into vision-language-action (VLA)-driven autonomous driving systems, where they can lead to unpredictable and hazardous behaviors. To mitigate these risks, the paper proposes RAISE, a novel safety-case framework specifically designed for instruction-driven autonomy. RAISE integrates language-related hazards into the system safety design and extends the Hazard Analysis and Risk Assessment (HARA) methodology to support risk analysis and safety justification in high-level semantic scenarios. By modeling multimodal instruction-to-behavior mappings, the approach enables the construction of verifiable, evidence-based safety claims on the SimLingo platform, thereby establishing a structured paradigm for assuring the safety of VLA-based autonomous systems.
Abstract
Vision-Language-Action (VLA)-based driving systems represent a significant paradigm shift in autonomous driving: by combining traffic scene understanding, linguistic interpretation, and action generation, they enable more flexible, adaptive, and instruction-responsive driving behaviors. However, despite their growing adoption and their potential to support socially responsible autonomous driving by understanding high-level human instructions, VLA-based driving systems may exhibit new types of hazardous behavior. In particular, the addition of natural language inputs (e.g., user or navigation instructions) to the multimodal control loop may lead to unpredictable and unsafe behaviors that endanger vehicle occupants and pedestrians. Assuring the safety of these systems is therefore crucial to building trust in their operation. To support this, we propose a novel safety case design approach called RAISE. Our approach introduces novel patterns tailored to instruction-based driving systems such as VLA-based driving systems, an extension of Hazard Analysis and Risk Assessment (HARA) detailing safe scenarios and their outcomes, and a design technique for creating the safety cases of VLA-based driving systems. A case study on SimLingo illustrates how our approach can be used to construct rigorous, evidence-based safety claims for this emerging class of autonomous driving systems.