Audio-3DVG: Unified Audio - Point Cloud Fusion for 3D Visual Grounding

📅 2025-07-01
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
This paper addresses the underexplored and challenging problem of 3D visual grounding (3DVG) from spoken instructions rather than text: localizing target objects in 3D point clouds. We propose the first unified audio-point cloud fusion framework for this task. Methodologically, we decompose the speech input into Object Mention Detection and Audio-Guided Attention to strengthen fine-grained speech-scene alignment: a multi-label classification head identifies which objects the utterance mentions, while a lightweight audio representation module leverages ASR features for robust spatial-semantic modeling of speech. Evaluated on mainstream 3DVG benchmarks, including ScanRefer, Sr3D, and Nr3D, our approach achieves state-of-the-art audio-based performance, with localization accuracy on par with text-based methods. This work constitutes the first systematic validation of spoken language as a viable and effective modality for 3D visual understanding, demonstrating its practical potential in real-world multimodal interaction.
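
As a concrete illustration, here is a minimal sketch of what the Object Mention Detection head could look like as a multi-label classifier over pooled speech features. The class name, the 768-dimensional speech embedding, the 18-class object vocabulary, and the layer sizes are all assumptions for illustration, not the authors' released code.

```python
# Hypothetical Object Mention Detection (OMD) head: a multi-label classifier
# predicting which object categories are mentioned in a spoken description.
# Dimensions and names are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn

class ObjectMentionDetector(nn.Module):
    def __init__(self, audio_dim: int = 768, num_classes: int = 18, hidden: int = 256):
        super().__init__()
        # audio_dim: size of the pooled utterance embedding (e.g., from a
        # pretrained speech/ASR encoder); num_classes: object vocabulary size.
        self.head = nn.Sequential(
            nn.Linear(audio_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, audio_feat: torch.Tensor) -> torch.Tensor:
        # audio_feat: (B, audio_dim). Returns per-class logits; apply a
        # sigmoid for independent multi-label mention probabilities.
        return self.head(audio_feat)

# Multi-label training uses binary cross-entropy over the class vocabulary:
detector = ObjectMentionDetector()
logits = detector(torch.randn(4, 768))           # (4, num_classes)
targets = torch.randint(0, 2, (4, 18)).float()   # which classes are mentioned
loss = nn.BCEWithLogitsLoss()(logits, targets)
```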

📝 Abstract
3D Visual Grounding (3DVG) involves localizing target objects in 3D point clouds based on natural language. While prior work has made strides using textual descriptions, leveraging spoken language, known as Audio-based 3D Visual Grounding, remains underexplored and challenging. Motivated by advances in automatic speech recognition (ASR) and speech representation learning, we propose Audio-3DVG, a simple yet effective framework that integrates audio and spatial information for enhanced grounding. Rather than treating speech as a monolithic input, we decompose the task into two complementary components. First, we introduce Object Mention Detection, a multi-label classification task that explicitly identifies which objects are referred to in the audio, enabling more structured audio-scene reasoning. Second, we propose an Audio-Guided Attention module that captures interactions between candidate objects and relational speech cues, improving target discrimination in cluttered scenes. To support benchmarking, we synthesize audio descriptions for standard 3DVG datasets, including ScanRefer, Sr3D, and Nr3D. Experimental results demonstrate that Audio-3DVG not only achieves new state-of-the-art performance in audio-based grounding, but also competes with text-based methods, highlighting the promise of integrating spoken language into 3D vision tasks.
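
The abstract describes Audio-Guided Attention as capturing interactions between candidate objects and relational speech cues. Below is a minimal sketch, assuming a standard cross-attention layer in which candidate object features query frame-level speech features; the projection, head count, and residual design are guesses, not the paper's exact architecture.

```python
# Sketch of an Audio-Guided Attention module: candidate object features
# attend over frame-level speech features so relational cues in the audio
# re-weight each candidate. Assumed design, not the authors' implementation.
import torch
import torch.nn as nn

class AudioGuidedAttention(nn.Module):
    def __init__(self, obj_dim: int = 256, audio_dim: int = 768, heads: int = 4):
        super().__init__()
        self.audio_proj = nn.Linear(audio_dim, obj_dim)  # align audio to object space
        self.attn = nn.MultiheadAttention(obj_dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(obj_dim)

    def forward(self, obj_feats: torch.Tensor, audio_frames: torch.Tensor) -> torch.Tensor:
        # obj_feats:    (B, num_objects, obj_dim)  candidate object embeddings
        # audio_frames: (B, num_frames, audio_dim) frame-level speech features
        kv = self.audio_proj(audio_frames)
        attended, _ = self.attn(query=obj_feats, key=kv, value=kv)
        # Residual connection preserves the original geometry-derived features.
        return self.norm(obj_feats + attended)

module = AudioGuidedAttention()
objs = torch.randn(2, 32, 256)    # 32 candidate objects per scene
audio = torch.randn(2, 100, 768)  # 100 speech frames
fused = module(objs, audio)       # (2, 32, 256)
```
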
Problem

Research questions and friction points this paper is trying to address.

Localizing objects in 3D point clouds using spoken language
Integrating audio and spatial information for 3D grounding
Improving target discrimination in cluttered 3D scenes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified audio-point cloud fusion framework (a combined inference sketch follows this list)
Object Mention Detection for audio-scene reasoning
Audio-Guided Attention module for target discrimination
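
Building on the two sketches above, the hypothetical glue below shows one way the pieces could combine at inference: mention probabilities from the OMD head gate the candidate objects, and a small scoring head ranks the audio-attended candidates. The gating threshold and the linear scorer are illustrative assumptions, not the paper's method.

```python
# Hypothetical fusion at inference time, reusing ObjectMentionDetector and
# AudioGuidedAttention from the sketches above. Threshold and scorer are
# assumptions for illustration only.
import torch
import torch.nn as nn

def ground_target(obj_feats, obj_classes, audio_pooled, audio_frames,
                  detector, attention, scorer, thresh: float = 0.5):
    # obj_feats: (B, N, D) candidate features; obj_classes: (B, N) class ids.
    mention_probs = torch.sigmoid(detector(audio_pooled))      # (B, C)
    mentioned = mention_probs.gather(1, obj_classes) > thresh  # (B, N) bool mask
    fused = attention(obj_feats, audio_frames)                 # (B, N, D)
    scores = scorer(fused).squeeze(-1)                         # (B, N)
    # Suppress candidates whose class the utterance never mentions; if the
    # mask is all False, argmax simply falls back to index 0.
    scores = scores.masked_fill(~mentioned, float("-inf"))
    return scores.argmax(dim=-1)                               # predicted target index

scorer = nn.Linear(256, 1)
target = ground_target(torch.randn(2, 32, 256), torch.randint(0, 18, (2, 32)),
                       torch.randn(2, 768), torch.randn(2, 100, 768),
                       ObjectMentionDetector(), AudioGuidedAttention(), scorer)
```
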
👥 Authors
Duc Cao-Dinh
Knovel Engineering Lab, Singapore
Khai Le-Duc
University of Toronto
Artificial Intelligence · Heal the world
Anh Dao
Undergraduate Student, Michigan State University
Vision-language · Multimodal LLM · Embodied AI · LLM
Bach Phan Tat
KU Leuven, Belgium
Chris Ngo
Knovel Engineering
Duy M. H. Nguyen
German Research Center for Artificial Intelligence (DFKI), Germany; Max Planck Research School for Intelligent Systems (IMPRS-IS), Germany; University of Stuttgart, Germany
Nguyen X. Khanh
UC Berkeley, USA
Thanh Nguyen-Tang
Johns Hopkins University
Machine Learning