MambaPlace: Text-to-Point-Cloud Cross-Modal Place Recognition with Attention Mamba Mechanisms

📅 2024-08-28
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenges of limited visual information and insufficient intra- and inter-modal correlation modeling between text and 3D point clouds in robot cross-modal localization, this paper introduces the State Space Model (Mamba) into end-to-end text-to-3D place recognition for the first time. We propose a two-stage framework: coarse-grained unimodal encoding followed by fine-grained cross-modal fusion. The key innovations are Text Attention Mamba (TAM), Point Clouds Mamba (PCM), and cascaded Cross-Attention Mamba (CCAM), which overcome Transformer limitations in capturing long-range dependencies and in modeling dynamic cross-modal interactions. The architecture integrates a pretrained T5 text encoder, a point-cloud instance encoder, and a cross-modal positional-offset regression head. Evaluated on KITTI360Pose, the method significantly outperforms state-of-the-art approaches, demonstrating Mamba's effectiveness and generalization capability for geometric–linguistic alignment tasks.
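As a rough illustration of the two-stage data flow the summary describes (coarse unimodal encoding with TAM/PCM, then fine cross-modal fusion with CCAM and an offset regression head), here is a minimal NumPy sketch. All encoders and Mamba blocks below are stand-in placeholders with hypothetical shapes, not the paper's actual modules:

```python
import numpy as np

rng = np.random.default_rng(0)

def text_encoder(tokens, d=64):
    """Stand-in for the pretrained T5 text encoder (hypothetical shapes)."""
    return rng.standard_normal((len(tokens), d))

def instance_encoder(instances, d=64):
    """Stand-in for the point-cloud instance encoder: one feature per instance."""
    return rng.standard_normal((len(instances), d))

def mamba_block(x):
    """Placeholder for the TAM/PCM/CCAM blocks: a simple causal,
    decaying state-space scan over the sequence (illustrative only)."""
    h = np.zeros(x.shape[1])
    out = np.empty_like(x)
    for t in range(x.shape[0]):
        h = 0.9 * h + x[t]      # decaying recurrent state
        out[t] = h
    return out

def mamba_place(tokens, instances):
    # Coarse stage: unimodal encoding + Mamba enhancement/alignment
    txt = mamba_block(text_encoder(tokens))        # ~ Text Attention Mamba (TAM)
    pts = mamba_block(instance_encoder(instances))  # ~ Point Clouds Mamba (PCM)
    # Fine stage: cross-modal fusion (concatenation + scan here stands in
    # for the cascaded Cross-Attention Mamba, CCAM)
    fused = mamba_block(np.concatenate([txt, pts], axis=0))
    # Offset regression head: pool fused features -> (dx, dy) positional offset
    pooled = fused.mean(axis=0)
    offset = pooled[:2]  # hypothetical linear head omitted for brevity
    return offset

offset = mamba_place(["the", "blue", "building"], [0, 1, 2, 3])
print(offset.shape)  # (2,)
```

The point of the sketch is only the control flow: each modality is enhanced on its own before fusion, and localization is cast as regressing a positional offset from the fused sequence.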

📝 Abstract
Vision Language Place Recognition (VLVPR) enhances robot localization performance by incorporating natural language descriptions of images. By utilizing language information, VLVPR directs robot place matching, overcoming the constraint of depending solely on vision. The essence of multimodal fusion lies in mining the complementary information between different modalities. However, general fusion methods rely on traditional neural architectures and are not well equipped to capture the dynamics of cross-modal interactions, especially in the presence of complex intra-modal and inter-modal correlations. To this end, this paper proposes a novel coarse-to-fine, end-to-end connected cross-modal place recognition framework, called MambaPlace. In the coarse localization stage, the text description and 3D point cloud are encoded by the pretrained T5 and an instance encoder, respectively. They are then processed by Text Attention Mamba (TAM) and Point Clouds Mamba (PCM) for data enhancement and alignment. In the subsequent fine localization stage, the features of the text description and 3D point cloud are cross-modally fused and further enhanced through cascaded Cross-Attention Mamba (CCAM). Finally, we predict the positional offset from the fused text–point cloud features, achieving the most accurate localization. Extensive experiments show that MambaPlace achieves improved localization accuracy on the KITTI360Pose dataset compared with state-of-the-art methods.
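The Mamba blocks underlying TAM, PCM, and CCAM are built on a selective state-space scan, in which the projections that write the input into the hidden state and read the output from it depend on the current token. A minimal NumPy sketch of that style of recurrence (hypothetical parameterization and fixed step size; not the paper's exact layer) could look like:

```python
import numpy as np

def selective_scan(x, d_state=8, seed=0):
    """Toy selective state-space scan in the spirit of Mamba:
    fixed stable dynamics A, but input-dependent B and C projections
    (hypothetical parameterization for illustration only)."""
    rng = np.random.default_rng(seed)
    L, d = x.shape
    A = -np.abs(rng.standard_normal(d_state))      # stable diagonal dynamics
    W_B = rng.standard_normal((d, d_state)) * 0.1  # input-dependent B = x[t] @ W_B
    W_C = rng.standard_normal((d, d_state)) * 0.1  # input-dependent C = x[t] @ W_C
    dt = 0.1                                       # fixed discretization step
    h = np.zeros(d_state)
    y = np.empty(L)
    for t in range(L):
        B = x[t] @ W_B                  # selection: projections vary per token
        C = x[t] @ W_C
        u = x[t].mean()                 # scalar input channel for simplicity
        h = np.exp(A * dt) * h + dt * B * u   # discretized state update
        y[t] = C @ h                    # token-dependent readout
    return y

x = np.random.default_rng(1).standard_normal((5, 4))
out = selective_scan(x)
print(out.shape)  # (5,)
```

Because the update at each step is a cheap recurrence rather than all-pairs attention, this kind of scan keeps linear cost in sequence length, which is the property the abstract leans on for modeling long cross-modal sequences.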
Problem

Research questions and friction points this paper is trying to address.

Vision-only place matching provides limited information for robot localization
Intra- and inter-modal correlations between text and 3D point clouds are insufficiently modeled
Transformer-based fusion struggles to capture long-range dependencies and dynamic cross-modal interactions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Attention Mamba Mechanisms
Text-to-Point-Cloud Fusion
Cascaded Cross Attention Mamba
Tianyi Shang
School of Mechanical Engineering, Qilu University of Technology (Shandong Academy of Sciences), Jinan 250353, China; Department of Electronic and Information Engineering, Fuzhou University, Fuzhou 350100, China
Zhenyu Li
School of Mechanical Engineering, Qilu University of Technology (Shandong Academy of Sciences), Jinan 250353, China
Wenhao Pei
School of Mechanical Engineering, Qilu University of Technology (Shandong Academy of Sciences), Jinan 250353, China
Pengjie Xu
School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai 200030, China
ZhaoJun Deng
Fanchen Kong
KU Leuven