UNeMo: Collaborative Visual-Language Reasoning and Navigation via a Multimodal World Model

📅 2025-11-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Vision-and-Language Navigation (VLN) faces two key challenges: (1) the disconnection between linguistic reasoning and visual perception, and (2) misalignment between the reasoning module's objectives and the navigation policy's optimization goals. To address these, we propose UNeMo, a unified framework featuring a novel Multimodal World Model (MWM) and a Hierarchical Prediction-Feedback Network (HPN). The MWM jointly predicts multimodal states (visual and linguistic) by integrating visual feature extraction, large language models, multimodal sequence modeling, and reinforcement learning, enabling future visual state forecasting and end-to-end co-training with the navigation policy. The HPN facilitates bidirectional co-optimization of reasoning and decision-making via hierarchical prediction and feedback. Evaluated on R2R and REVERIE, UNeMo achieves absolute improvements of +2.1% and +0.7% in unseen-scene navigation accuracy over prior state-of-the-art methods, demonstrating the effectiveness of cross-modal collaborative reasoning and joint optimization.

📝 Abstract
Vision-and-Language Navigation (VLN), which requires agents to autonomously navigate complex environments from visual images and natural language instructions, remains highly challenging. Recent research on enhancing language-guided navigation reasoning with pre-trained large language models (LLMs) has shown promise. However, the reasoning in such methods is limited to the linguistic modality and lacks visual reasoning capabilities. Moreover, existing reasoning modules are optimized separately from navigation policies, leading to incompatibility and potential conflicts in optimization objectives. To tackle these challenges, we introduce UNeMo, a novel framework designed for the collaborative optimization of visual state reasoning and navigational decision-making. It introduces a Multimodal World Model (MWM) that takes visual features, language instructions, and navigation actions as inputs to jointly predict subsequent visual states, enabling cross-modal reasoning. Via a Hierarchical Prediction-Feedback Network (HPN), the MWM collaborates with the navigation policy: the first layer generates actions from current vision-and-language features; the MWM then infers post-action visual states to guide the second layer's fine-grained decisions. This forms a dynamic bidirectional promotion mechanism in which MWM reasoning optimizes the navigation policy, while policy decisions feed back to improve the MWM's reasoning accuracy. Experiments on the R2R and REVERIE datasets show that UNeMo outperforms state-of-the-art methods by 2.1% and 0.7% in navigation accuracy on unseen scenes, validating its effectiveness.
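The two-stage decision loop described above can be sketched in plain Python. This is a minimal illustrative sketch, not the authors' implementation: the function names (`coarse_policy`, `multimodal_world_model`, `refined_policy`, `hpn_step`) and the toy scoring logic are all assumptions standing in for the paper's learned networks.

```python
# Hypothetical sketch of the Hierarchical Prediction-Feedback loop.
# Features are toy per-candidate-direction scores, not real embeddings.

def coarse_policy(visual_feat, lang_feat):
    """First layer: propose an action from current vision-language features."""
    scores = [v + l for v, l in zip(visual_feat, lang_feat)]
    return scores.index(max(scores))

def multimodal_world_model(visual_feat, lang_feat, action):
    """MWM stand-in: predict the visual state that would follow the action."""
    # Toy dynamics: boost the feature of the direction the agent would take.
    return [f + (1.0 if i == action else 0.0) for i, f in enumerate(visual_feat)]

def refined_policy(visual_feat, lang_feat, predicted_feat):
    """Second layer: re-decide using the predicted post-action visual state."""
    scores = [v + l + p for v, l, p in zip(visual_feat, lang_feat, predicted_feat)]
    return scores.index(max(scores))

def hpn_step(visual_feat, lang_feat):
    """One step: coarse action -> MWM prediction -> refined action."""
    a_coarse = coarse_policy(visual_feat, lang_feat)
    predicted = multimodal_world_model(visual_feat, lang_feat, a_coarse)
    a_final = refined_policy(visual_feat, lang_feat, predicted)
    return a_coarse, a_final

visual = [0.2, 0.9, 0.1]    # visual salience per candidate direction
language = [0.5, 0.1, 0.4]  # instruction alignment per candidate direction
print(hpn_step(visual, language))
```

In the full system both stages and the MWM would be trained jointly, so the predicted state both refines the current decision and, through the policy's outcomes, improves the world model's own predictions.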
Problem

Research questions and friction points this paper is trying to address.

Enhancing visual reasoning in language-guided navigation systems
Resolving incompatibility between reasoning modules and navigation policies
Enabling collaborative optimization of multimodal perception and action
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal World Model for cross-modal reasoning
Hierarchical Prediction-Feedback mechanism for collaboration
Joint optimization of visual reasoning and navigation policies
Changxin Huang
Shenzhen University, Assistant Professor
Robotics · Reinforcement Learning
Lv Tang
University of Alberta. Former researcher @ UCAS/Nanjing University
Computer Vision · MLLM · Video Compression · Image Segmentation
Zhaohuan Zhan
Department of Engineering, Shenzhen MSU-BIT University, Shenzhen, China
Lisha Yu
School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China
Runhao Zeng
Artificial Intelligence Research Institute, Shenzhen MSU-BIT University, Shenzhen, China
Zun Liu
School of Artificial Intelligence, Shenzhen University, Shenzhen, China
Zhengjie Wang
School of Mechatronical Engineering, Beijing Institute of Technology, Beijing, China
Jianqiang Li
School of Artificial Intelligence, Shenzhen University, Shenzhen, China