🤖 AI Summary
Vision-and-Language Navigation (VLN) faces two key challenges: (1) the disconnection between linguistic reasoning and visual perception, and (2) misalignment between the reasoning module's objectives and the navigation policy's optimization goals. To address these, we propose UNeMo—a unified framework featuring a novel Multimodal World Model (MWM) and a Hierarchical Prediction-Feedback Network (HPN). The MWM jointly predicts multimodal (visual and linguistic) states by integrating visual feature extraction, large language models, multimodal sequence modeling, and reinforcement learning, enabling future visual-state forecasting and end-to-end co-training with the navigation policy. The HPN enables bidirectional co-optimization of reasoning and decision-making via hierarchical prediction and feedback. Evaluated on R2R and REVERIE, UNeMo achieves absolute improvements of +2.1% and +0.7% in unseen-scene navigation accuracy over prior state-of-the-art methods, demonstrating the effectiveness of cross-modal collaborative reasoning and joint optimization.
📝 Abstract
Vision-and-Language Navigation (VLN), which requires agents to autonomously navigate complex environments using visual observations and natural language instructions, remains highly challenging. Recent research on enhancing language-guided navigation reasoning with pre-trained large language models (LLMs) has shown promise. However, the reasoning of such methods is limited to the linguistic modality and lacks visual reasoning capabilities. Moreover, existing reasoning modules are optimized separately from navigation policies, leading to incompatibility and potential conflicts in optimization objectives. To tackle these challenges, we introduce UNeMo, a novel framework designed for the collaborative optimization of visual state reasoning and navigational decision-making. It introduces a Multimodal World Model (MWM) that takes visual features, language instructions, and navigation actions as inputs and jointly predicts subsequent visual states, enabling cross-modal reasoning. Via a Hierarchical Prediction-Feedback (HPN) mechanism, the MWM collaborates with the navigation policy: the first layer generates actions from current vision-and-language features; the MWM then infers post-action visual states to guide the second layer's fine-grained decisions. This forms a dynamic bidirectional promotion mechanism in which MWM reasoning optimizes the navigation policy, while policy decisions feed back to improve the MWM's reasoning accuracy. Experiments on the R2R and REVERIE datasets show that UNeMo outperforms state-of-the-art methods by 2.1% and 0.7% in navigation accuracy on unseen scenes, validating its effectiveness.
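The hierarchical prediction-feedback flow described above can be sketched in a minimal, illustrative form. This is not the paper's implementation: the module names, feature dimension, and the use of fixed random linear maps in place of learned networks are all assumptions made purely to show the two-stage decision flow (coarse policy → world-model rollout of each candidate action → refined policy over predicted states).

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8            # illustrative feature dimension (assumption)
N_ACTIONS = 4    # number of candidate navigation actions (assumption)

# Hypothetical stand-ins for learned modules: plain linear maps.
W_mwm = rng.standard_normal((3 * D, D)) * 0.1           # multimodal world model
W_pi1 = rng.standard_normal((2 * D, N_ACTIONS)) * 0.1   # layer-1 (coarse) policy
W_pi2 = rng.standard_normal((3 * D, N_ACTIONS)) * 0.1   # layer-2 (fine) policy

def mwm_predict(visual, language, action_emb):
    """Predict the post-action visual state from (vision, language, action)."""
    x = np.concatenate([visual, language, action_emb])
    return np.tanh(x @ W_mwm)

def hpn_step(visual, language, action_embs):
    """One hierarchical prediction-feedback decision step."""
    # Layer 1: coarse action scores from current vision-and-language features.
    coarse = np.concatenate([visual, language]) @ W_pi1
    # MWM: imagine the visual state resulting from each candidate action.
    predicted = np.stack([mwm_predict(visual, language, a) for a in action_embs])
    # Layer 2: refine each action's score using its predicted future state.
    fine = np.array([
        np.concatenate([visual, language, predicted[i]]) @ W_pi2[:, i]
        for i in range(N_ACTIONS)
    ])
    return int(np.argmax(coarse + fine)), predicted

# Toy inputs standing in for encoder outputs.
visual = rng.standard_normal(D)
language = rng.standard_normal(D)
action_embs = rng.standard_normal((N_ACTIONS, D))
best_action, predicted_states = hpn_step(visual, language, action_embs)
```

In training, the paper's bidirectional mechanism would additionally backpropagate the policy's loss into the world model (and the world model's prediction loss into the shared features), which this forward-only sketch omits.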