AI Summary
This study systematically evaluates the role of visual input in multimodal large language models (MLLMs) for robot path planning, focusing on spatial reasoning, constraint satisfaction, and scalability bottlenecks. We introduce the first zero-shot and few-shot path planning benchmark for 2D grid environments, comprehensively evaluating 15 state-of-the-art MLLMs under varying grid scales to isolate the contributions of textual versus visual inputs. Results show that vision improves path validity and optimality on small grids but degrades markedly as grid size increases, revealing fundamental limitations in long-range spatial modeling and hard-constraint adherence. Our core contribution is a reproducible evaluation framework that quantifies, for the first time, the complementary boundaries between visual and textual modalities in path planning and empirically identifies model scalability thresholds.
Abstract
Large Language Models (LLMs) show potential for enhancing robotic path planning. This paper assesses the utility of visual input for multimodal LLMs in such tasks via a comprehensive benchmark. We evaluated 15 multimodal LLMs on generating valid and optimal paths in 2D grid environments, a simplified proxy for robotic planning, comparing text-only versus text-plus-visual inputs across varying model sizes and grid complexities. Our results indicate moderate success rates on smaller grids, where visual input or few-shot text prompting offered some benefit. However, performance degraded significantly on larger grids, highlighting a scalability challenge. While larger models generally achieved higher average success, the visual modality was not universally dominant over well-structured text for these multimodal systems, and successful paths on simpler grids were generally of high quality. These results indicate current limitations in robust spatial reasoning, constraint adherence, and scalable multimodal integration, identifying areas for future LLM development in robotic path planning.
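The abstract's two scoring criteria, path validity and path optimality, can be made concrete with a small sketch. The grid encoding (0 = free, 1 = obstacle), 4-connected moves, and the function names below are illustrative assumptions, not the paper's actual evaluation harness: a generated path is valid if it stays in bounds, avoids obstacles, and moves one adjacent cell at a time; it is optimal if its length matches the BFS shortest-path length.

```python
from collections import deque

def is_valid_path(grid, path):
    """Assumed validity check: in-bounds, obstacle-free (grid cell == 0),
    and each step moves exactly one 4-connected cell."""
    rows, cols = len(grid), len(grid[0])
    for i, (r, c) in enumerate(path):
        if not (0 <= r < rows and 0 <= c < cols) or grid[r][c] == 1:
            return False
        if i > 0:
            pr, pc = path[i - 1]
            if abs(r - pr) + abs(c - pc) != 1:  # must be a unit step
                return False
    return True

def shortest_path_length(grid, start, goal):
    """BFS over free cells; returns minimum number of steps, or None
    if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        (r, c), d = frontier.popleft()
        if (r, c) == goal:
            return d
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                frontier.append(((nr, nc), d + 1))
    return None

# Example: 3x3 grid with a central obstacle.
grid = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
path = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2)]
valid = is_valid_path(grid, path)                      # True
optimal = len(path) - 1 == shortest_path_length(grid, (0, 0), (2, 2))  # True
```

Under this scoring, a model-generated path is counted as a success only if `valid` holds, and as optimal only if its step count equals the BFS length.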