🤖 AI Summary
This work investigates the genuine visual understanding capability of Vision-and-Language Navigation (VLN) models, revealing that recent performance gains often stem from overfitting rather than robust visual grounding. To address this, we propose a Multi-Branch Architecture (MBA): it systematically evaluates the critical role of visual input diversity in navigation robustness via depth-map guidance, viewpoint perturbation, and noise injection, and employs a lightweight, topology-free multi-branch feature fusion mechanism compatible with mainstream VLN backbone models. Experiments demonstrate that MBA matches or surpasses state-of-the-art performance on the R2R, REVERIE, and SOON benchmarks, not by enhancing single-frame visual quality, but solely by optimizing the arrangement of visual inputs. The implementation is publicly available.
📝 Abstract
Autonomous navigation guided by natural language instructions in embodied environments remains a challenge for vision-language navigation (VLN) agents. Although recent advances in learning diverse and fine-grained visual environmental representations have shown promise, the fragile performance improvements may not be conclusively attributable to enhanced visual grounding, a limitation also observed in related vision-language tasks. In this work, we preliminarily investigate whether advanced VLN models genuinely comprehend the visual content of their environments by introducing varying levels of visual perturbation: ground-truth depth images, perturbed views, and random noise. Surprisingly, we find experimentally that simple branch expansion, even with noisy visual inputs, paradoxically improves navigational efficacy. Inspired by these insights, we further present a versatile Multi-Branch Architecture (MBA) designed to probe the impact of both branch quantity and visual quality. The proposed MBA extends a base agent into a multi-branch variant, where each branch processes a different visual input. This approach is embarrassingly simple yet agnostic to the choice of topology-based VLN agent. Extensive experiments on three VLN benchmarks (R2R, REVERIE, SOON) demonstrate that our method, with an optimal visual permutation, matches or even surpasses state-of-the-art results. The source code is available here.
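The core idea of extending a base agent into a multi-branch variant, where each branch encodes a different visual input before a lightweight fusion, can be sketched in highly simplified form as follows. Everything here is an illustrative assumption: the linear per-branch "encoder" and mean-pooling fusion are stand-ins, not the paper's actual architecture, and the input names (RGB, depth, noise) merely mirror the perturbation types studied.

```python
import numpy as np

def encode_branch(view, weight):
    # Toy per-branch encoder: a single linear projection.
    # (A real VLN agent would use its backbone's visual encoder here.)
    return view @ weight

def mba_fuse(views, weights):
    # Lightweight multi-branch fusion: each visual input gets its own
    # branch, and the branch features are averaged (illustrative choice).
    feats = [encode_branch(v, w) for v, w in zip(views, weights)]
    return np.mean(feats, axis=0)

rng = np.random.default_rng(0)
d_in, d_out = 8, 4
rgb   = rng.standard_normal(d_in)   # base RGB view features
depth = rng.standard_normal(d_in)   # ground-truth depth-map features
noise = rng.standard_normal(d_in)   # pure random-noise input
weights = [rng.standard_normal((d_in, d_out)) for _ in range(3)]

fused = mba_fuse([rgb, depth, noise], weights)
print(fused.shape)  # one fused feature vector of dimension d_out
```

The point of the sketch is only the structure: adding branches changes the arrangement of visual inputs, not the quality of any single view, which is the variable the paper's experiments isolate.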