🤖 AI Summary
Traditional pathfinding algorithms (e.g., A*, Dijkstra) suffer from low search efficiency and high memory overhead on large-scale raster maps, while recent LLM-A* approaches exhibit poor planning efficiency and instability due to spatial illusions and low-quality waypoints. To address these limitations, this paper proposes iLLM-A*, an efficient path-planning algorithm integrating large language models (LLMs) with enhanced search mechanisms. Its core contributions are: (1) a lightweight A* optimization reducing computational redundancy; (2) an incremental learning-based waypoint generation framework ensuring high-quality, context-aware intermediate goals; and (3) an adaptive waypoint selection strategy dynamically balancing exploration and exploitation. Extensive experiments demonstrate that iLLM-A* achieves over 1000× average speedup versus LLM-A* (up to 2349.5×), reduces memory consumption by up to 58.6%, yields shorter paths, and significantly improves robustness across diverse map topologies.
📝 Abstract
Path planning in grid maps, arising from various applications, has garnered significant attention. Existing methods, such as A*, Dijkstra, and their variants, work well for small-scale maps but fail to address large-scale ones due to high search time and memory consumption. Recently, Large Language Models (LLMs) have shown remarkable performance in path planning but still suffer from spatial illusion and poor planning performance. Among these works, LLM-A* \cite{meng2024llm} leverages an LLM to generate a series of waypoints and then uses A* to plan the path between neighboring waypoints; in this way, the complete path is constructed. However, LLM-A* still incurs high computational time on large-scale maps. To fill this gap, we conduct a deep investigation into LLM-A* and identify the bottlenecks that limit its performance. Accordingly, we design an innovative LLM-enhanced algorithm, abbreviated as iLLM-A*. iLLM-A* includes three carefully designed mechanisms: an optimization of A*, an incremental learning method for the LLM to generate high-quality waypoints, and the selection of appropriate waypoints for A* path planning. Finally, a comprehensive evaluation on various grid maps shows that, compared with LLM-A*, iLLM-A* \textbf{1) achieves more than $1000\times$ speedup on average, and up to $2349.5\times$ speedup in the extreme case, 2) saves up to $58.6\%$ of the memory cost, and 3) achieves both a noticeably shorter path length and a lower path-length standard deviation.}
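The waypoint-chaining idea behind LLM-A* described above can be sketched as follows. This is a minimal illustration only, not the paper's implementation: a plain A* on a 4-connected grid, with a hand-supplied waypoint list standing in for the LLM's output, and hypothetical function names (`a_star`, `plan_via_waypoints`).

```python
import heapq

def a_star(grid, start, goal):
    """Plain A* on a 4-connected grid; grid[r][c] == 1 marks an obstacle."""
    rows, cols = len(grid), len(grid[0])

    def h(p):  # Manhattan-distance heuristic (admissible on a 4-connected grid)
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_heap = [(h(start), 0, start)]
    g = {start: 0}
    parent = {start: None}
    while open_heap:
        _, cost, cur = heapq.heappop(open_heap)
        if cur == goal:  # reconstruct path by walking parents back to start
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        if cost > g[cur]:
            continue  # stale heap entry; a cheaper route to cur was found later
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = cost + 1
                if ng < g.get(nxt, float("inf")):
                    g[nxt] = ng
                    parent[nxt] = cur
                    heapq.heappush(open_heap, (ng + h(nxt), ng, nxt))
    return None  # goal unreachable

def plan_via_waypoints(grid, start, goal, waypoints):
    """Chain short A* searches through intermediate waypoints
    (here given as input, in place of LLM-generated ones)."""
    full_path = []
    anchors = [start, *waypoints, goal]
    for a, b in zip(anchors, anchors[1:]):
        seg = a_star(grid, a, b)
        if seg is None:
            return None
        # Drop the first cell of every segment after the first,
        # so join points are not duplicated in the final path.
        full_path.extend(seg if not full_path else seg[1:])
    return full_path
```

Because each segment's search is confined between nearby anchors, the searched area (and hence time and memory) can be far smaller than one monolithic A* run over the whole map, which is the efficiency argument the abstract makes.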