🤖 AI Summary
Visual-language navigation (VLN) suffers from poor zero-shot generalization and heavy reliance on joint semantic mapping. Method: This paper proposes a training-free, modular framework: (1) an LLM parses navigation instructions to extract landmarks and their temporal order; (2) candidate locations are retrieved from a topological environment graph to generate path hypotheses; (3) a vision-language model (VLM) scores matches between full-panorama observation sequences and the landmark sequence, and dynamic programming computes the best alignment. Contribution/Results: By decoupling semantic parsing, path search, and cross-modal alignment, and by eliminating explicit semantic mapping, the approach significantly improves interpretability and zero-shot generalization. On the R2R-Habitat benchmark, it outperforms state-of-the-art joint-semantic-mapping methods (e.g., VLMaps), demonstrating that zero-shot visual grounding provides critical performance gains for VLN.
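The path-hypothesis step (2) amounts to a shortest-path query per candidate goal on the topological graph. A minimal sketch, assuming an unweighted adjacency-dict graph; the function names and graph format are illustrative, not the paper's code:

```python
from collections import deque

def shortest_path(graph, start, goal):
    """BFS shortest path on an unweighted topological graph given as an
    adjacency dict {node: [neighbors]}. Returns a node list or None."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for nb in graph.get(node, []):
            if nb not in visited:
                visited.add(nb)
                queue.append(path + [nb])
    return None  # goal unreachable from start

def path_hypotheses(graph, start, candidate_goals):
    """One shortest-path hypothesis per retrieved top-k candidate location
    of the last landmark; unreachable candidates are skipped."""
    paths = (shortest_path(graph, start, g) for g in candidate_goals)
    return [p for p in paths if p is not None]
```

Each returned node sequence would then be rendered as a sequence of panoramas for the alignment stage.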
📝 Abstract
In this work, we propose a modular approach to the Vision-Language Navigation (VLN) task by decomposing the problem into four sub-modules that use state-of-the-art Large Language Models (LLMs) and Vision-Language Models (VLMs) in a zero-shot setting. Given a navigation instruction in natural language, we first prompt an LLM to extract the landmarks and the order in which they are visited. Assuming a known model of the environment, we retrieve the top-$k$ locations of the last landmark and generate $k$ path hypotheses from the starting location to the last landmark using a shortest-path algorithm on the topological map of the environment. Each path hypothesis is represented by a sequence of panoramas. We then use dynamic programming to compute the alignment score between the sequence of panoramas and the sequence of landmark names, using match scores obtained from the VLM. Finally, we compute the nDTW metric between the hypothesis that yields the highest alignment score and the ground-truth path to evaluate path fidelity. We demonstrate superior performance compared to approaches that use joint semantic maps, such as VLMaps \cite{vlmaps}, on the complex R2R-Habitat \cite{r2r} instruction dataset, and quantify in detail the effect of visual grounding on navigation performance.
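The dynamic-programming alignment can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a precomputed matrix of VLM match scores (e.g., image-text similarities) between each panorama and each landmark name, and finds the best monotonic assignment of panoramas to landmarks in order:

```python
import numpy as np

def alignment_score(scores: np.ndarray) -> float:
    """Best monotonic alignment of panoramas (rows) to landmarks (cols).

    scores[i, j] is an assumed VLM match score between panorama i and
    landmark j. Each panorama is assigned one landmark; assignments must
    be non-decreasing in landmark order, starting at the first landmark
    and ending at the last.
    """
    m, n = scores.shape
    dp = np.full((m, n), -np.inf)
    dp[0, 0] = scores[0, 0]  # first panorama aligns to first landmark
    for i in range(1, m):
        for j in range(n):
            # stay on the same landmark, or advance to the next one
            best_prev = dp[i - 1, j]
            if j > 0:
                best_prev = max(best_prev, dp[i - 1, j - 1])
            dp[i, j] = scores[i, j] + best_prev
    return float(dp[m - 1, n - 1])
```

The hypothesis with the highest such score is selected and then compared against the ground-truth path with nDTW.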