EmergeNav: Structured Embodied Inference for Zero-Shot Vision-and-Language Navigation in Continuous Environments

📅 2026-03-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Zero-shot vision-and-language navigation in continuous environments struggles to effectively translate the semantic priors of vision-language models into long-horizon embodied execution capabilities. To address this challenge, this work proposes EmergeNav, a novel framework that introduces an explicit Plan–Solve–Transition hierarchical execution structure. EmergeNav integrates goal-informed perception encoding (GIPE), contrastive dual-memory progress reasoning, and a role-disentangled Dual-FOV perception mechanism, enabling stable navigation without task-specific training or prior maps. Leveraging open-source VLM backbones such as Qwen3-VL, the method achieves success rates of 30.00% and 37.00% on the VLN-CE benchmark using only 8B and 32B parameter models, respectively, substantially outperforming existing zero-shot approaches.

📝 Abstract
Zero-shot vision-and-language navigation in continuous environments (VLN-CE) remains challenging for modern vision-language models (VLMs). Although these models encode useful semantic priors, their open-ended reasoning does not directly translate into stable long-horizon embodied execution. We argue that the key bottleneck is not missing knowledge alone, but missing an execution structure for organizing instruction following, perceptual grounding, temporal progress, and stage verification. We propose EmergeNav, a zero-shot framework that formulates continuous VLN as structured embodied inference. EmergeNav combines a Plan--Solve--Transition hierarchy for stage-structured execution, GIPE for goal-conditioned perceptual extraction, contrastive dual-memory reasoning for progress grounding, and role-separated Dual-FOV sensing for time-aligned local control and boundary verification. On VLN-CE, EmergeNav achieves strong zero-shot performance using only open-source VLM backbones and no task-specific training, explicit maps, graph search, or waypoint predictors, reaching 30.00 SR with Qwen3-VL-8B and 37.00 SR with Qwen3-VL-32B. These results suggest that explicit execution structure is a key ingredient for turning VLM priors into stable embodied navigation behavior.
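The stage-structured execution the abstract describes can be pictured as a simple Plan–Solve–Transition loop: decompose the instruction into stages, act toward each stage, and verify the stage boundary before advancing. The sketch below is purely illustrative — every function name and the comma-based decomposition are assumptions, not the paper's actual API; a real system would query the VLM backbone at each step.

```python
# Hypothetical sketch of a Plan–Solve–Transition loop.
# All names and logic here are illustrative stand-ins, not EmergeNav's code.
from dataclasses import dataclass

@dataclass
class Stage:
    goal: str            # sub-goal parsed from the instruction
    done: bool = False

def plan(instruction: str) -> list[Stage]:
    # Plan: split the instruction into ordered sub-goals
    # (stand-in for the VLM's stage decomposition).
    return [Stage(goal=s.strip()) for s in instruction.split(",") if s.strip()]

def solve(stage: Stage, step_budget: int = 3) -> list[str]:
    # Solve: emit low-level actions toward the current sub-goal
    # (a real system would condition on goal-informed perception here).
    return [f"step_toward({stage.goal})" for _ in range(step_budget)]

def transition(stage: Stage) -> bool:
    # Transition: verify the stage boundary before advancing
    # (the paper assigns boundary verification to a separate FOV role).
    stage.done = True
    return stage.done

def navigate(instruction: str) -> list[str]:
    trace: list[str] = []
    for stage in plan(instruction):
        trace += solve(stage)
        if not transition(stage):
            break  # a real agent would retry or re-plan here
    return trace
```

The point of the structure is that progress checks happen at explicit stage boundaries rather than being left to the VLM's open-ended reasoning, which the authors argue is the key to long-horizon stability.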
Problem

Research questions and friction points this paper is trying to address.

Vision-and-Language Navigation
Zero-shot Learning
Continuous Environments
Embodied AI
Execution Structure
Innovation

Methods, ideas, or system contributions that make the work stand out.

structured embodied inference
zero-shot VLN
stage-structured execution
dual-memory reasoning
Dual-FOV sensing
Kun Luo
Zhejiang University
Xiaoguang Ma
Foshan Graduate School of Innovation, Northeastern University