Co-NavGPT: Multi-Robot Cooperative Visual Semantic Navigation Using Vision Language Models

📅 2023-10-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing autonomous robots for vision-based target navigation in unknown environments lack commonsense reasoning capabilities and are predominantly single-agent systems, which limits collaboration efficiency and robustness. This paper proposes a multi-robot collaborative visual semantic navigation framework: a vision-language model (Qwen-VL) serves as the global planner; multi-view submaps from the robots are fused into a unified semantic map; and frontier-region semantic encoding, combined with multi-robot state fusion, enables VLM-driven dynamic task allocation. To the authors' knowledge, this is the first work to integrate VLMs into multi-robot collaborative navigation, achieving semantic-level coordination without task-specific fine-tuning and without reliance on handcrafted priors. The method achieves significant performance gains over baselines on the HM3D dataset; real-world validation is conducted on a quadruped robot platform; and ablation studies confirm that the VLM's commonsense reasoning capability is critical to the observed improvements.
📝 Abstract
Visual target navigation is a critical capability for autonomous robots operating in unknown environments, particularly in human-robot interaction scenarios. While classical and learning-based methods have shown promise, most existing approaches lack common-sense reasoning and are typically designed for single-robot settings, leading to reduced efficiency and robustness in complex environments. To address these limitations, we introduce Co-NavGPT, a novel framework that integrates a Vision Language Model (VLM) as a global planner to enable common-sense multi-robot visual target navigation. Co-NavGPT aggregates sub-maps from multiple robots with diverse viewpoints into a unified global map, encoding robot states and frontier regions. The VLM uses this information to assign frontiers across the robots, facilitating coordinated and efficient exploration. Experiments on the Habitat-Matterport 3D (HM3D) dataset demonstrate that Co-NavGPT outperforms existing baselines in terms of success rate and navigation efficiency, without requiring task-specific training. Ablation studies further confirm the importance of semantic priors from the VLM. We also validate the framework in real-world scenarios using quadrupedal robots. Supplementary video and code are available at: https://sites.google.com/view/co-navgpt2.
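The abstract's map-aggregation step, where per-robot sub-maps are merged into one global map and frontier regions are extracted from it, can be sketched as follows. This is a minimal illustration assuming occupancy grids already registered in a shared global frame; the cell labels, function names, and the max-based merge rule are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Illustrative cell encoding for an occupancy grid (assumed, not from the paper).
UNKNOWN, FREE, OBSTACLE = 0, 1, 2

def fuse_submaps(submaps):
    """Merge per-robot submaps (same global grid size) into one map.
    With this encoding, an OBSTACLE seen by any robot overrides FREE,
    and FREE overrides UNKNOWN, so no robot's evidence is discarded."""
    fused = np.zeros_like(submaps[0])
    for m in submaps:
        fused = np.maximum(fused, m)
    return fused

def extract_frontiers(fused):
    """Frontier cells: FREE cells with at least one UNKNOWN 4-neighbour.
    These boundaries between explored and unexplored space are the
    candidate goals the global planner assigns to robots."""
    free = fused == FREE
    unknown = fused == UNKNOWN
    neigh = np.zeros_like(unknown)
    # OR the unknown mask shifted in the four cardinal directions
    neigh[1:, :] |= unknown[:-1, :]
    neigh[:-1, :] |= unknown[1:, :]
    neigh[:, 1:] |= unknown[:, :-1]
    neigh[:, :-1] |= unknown[:, 1:]
    return free & neigh
```

In practice each robot's local map would first be transformed into the global frame from its estimated pose; the merge rule above is the simplest conflict resolution and real systems typically fuse log-odds occupancy instead.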
Problem

Research questions and friction points this paper is trying to address.

Enabling multi-robot visual semantic navigation using VLMs
Improving navigation efficiency via coordinated frontier assignment
Leveraging semantic priors for robust exploration without task-specific training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates Vision Language Model for global planning
Aggregates multi-robot sub-maps into unified global map
Uses semantic priors for coordinated frontier assignment
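The last two points, serialising robot states and frontier regions so a VLM can act as the global planner, amount to prompt construction plus parsing of the model's reply. A minimal sketch follows; the prompt template, JSON reply format, and fallback behaviour are hypothetical assumptions for illustration, not the paper's actual interface.

```python
import json

def build_assignment_prompt(target, robot_positions, frontiers):
    """Serialise robot states and frontier descriptions into a text prompt
    asking the VLM to assign one frontier per robot.
    (Hypothetical template; the paper's exact prompt differs.)"""
    lines = [
        f"Target object: {target}",
        "Robots:",
        *[f"  robot_{i}: position={pos}" for i, pos in enumerate(robot_positions)],
        "Frontiers:",
        *[f"  frontier_{j}: centroid={f['centroid']}, nearby objects={f['objects']}"
          for j, f in enumerate(frontiers)],
        'Reply with JSON: {"robot_0": <frontier index>, "robot_1": <frontier index>, ...}',
    ]
    return "\n".join(lines)

def parse_assignment(vlm_reply, n_robots):
    """Extract the robot-to-frontier mapping from the VLM's JSON reply,
    falling back to frontier 0 for any robot the reply omits."""
    data = json.loads(vlm_reply)
    return [int(data.get(f"robot_{i}", 0)) for i in range(n_robots)]
```

The loop implied by the summary would then be: fuse sub-maps, extract and describe frontiers, query the VLM with this prompt, and dispatch each robot to its assigned frontier, repeating until the target is found.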