π€ AI Summary
This study addresses the risks posed by large language models (LLMs) in safety-critical robotic decision-making, particularly due to rare but catastrophic errors. Focusing on fire evacuation scenarios, the authors propose a Safety-Oriented Spatial Reasoning (SOSR) framework comprising seven quantitative evaluation tasks grounded in ASCII maps and natural language instructions, designed to eliminate visual ambiguity and rigorously assess spatial reasoning capabilities and hallucination tendencies. Experimental results reveal that despite overall accuracy rates as high as 99%, LLMsβand in some cases vision-language models (VLMs)βcan still commit fatal errors under both complete and incomplete information, with navigation success rates dropping to 0% in certain configurations and even directing robots toward hazardous zones. These findings underscore that current LLMs are not yet suitable for direct deployment in safety-critical systems.
π Abstract
One mistake by an AI system in a safety-critical setting can cost lives. As Large Language Models (LLMs) become integral to robotics decision-making, the physical dimension of risk grows; a single wrong instruction can directly endanger human safety. This paper addresses the urgent need to systematically evaluate LLM performance in scenarios where even minor errors are catastrophic. Through a qualitative evaluation of a fire evacuation scenario, we identified critical failure cases in LLM-based decision-making. Based on these, we designed seven tasks for quantitative assessment, categorized into: Complete Information, Incomplete Information, and Safety-Oriented Spatial Reasoning (SOSR). Complete information tasks utilize ASCII maps to minimize interpretation ambiguity and isolate spatial reasoning from visual processing. Incomplete information tasks require models to infer missing context, testing for spatial continuity versus hallucinations. SOSR tasks use natural language to evaluate safe decision-making in life-threatening contexts. We benchmark various LLMs and Vision-Language Models (VLMs) across these tasks. Beyond aggregate performance, we analyze the implications of a 1% failure rate, highlighting how"rare"errors escalate into catastrophic outcomes. Results reveal serious vulnerabilities: several models achieved a 0% success rate in ASCII navigation, while in a simulated fire drill, models instructed robots to move toward hazardous areas instead of emergency exits. Our findings lead to a sobering conclusion: current LLMs are not ready for direct deployment in safety-critical systems. A 99% accuracy rate is dangerously misleading in robotics, as it implies one out of every hundred executions could result in catastrophic harm. We demonstrate that even state-of-the-art models cannot guarantee safety, and absolute reliance on them creates unacceptable risks.