Mono3DVG-EnSD: Enhanced Spatial-aware and Dimension-decoupled Text Encoding for Monocular 3D Visual Grounding

šŸ“… 2025-11-10
šŸ“ˆ Citations: 0
✨ Influential: 0
šŸ¤– AI Summary
To address insufficient modeling of spatial descriptions and cross-dimensional interference in textual features for monocular 3D visual grounding, this paper proposes a spatially aware, dimension-decoupled text-encoding framework. Methodologically: (1) a CLIP-Guided Lexical Certainty Adapter (CLIP-LCA) dynamically masks high-certainty keywords while explicitly preserving implicit spatial relationships; (2) a Dimension-Decoupled Module (D2M) separates 2D appearance-related from 3D spatial textual representations, enabling cross-modal alignment with consistent dimensional semantics. Evaluated on the Mono3DRefer benchmark, the approach achieves state-of-the-art performance, improving Far (Acc@0.5) by 13.54% over prior methods and significantly enhancing natural-language-driven 3D object localization from single RGB images.
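The masking idea behind the CLIP-LCA can be sketched in a few lines. This is a hypothetical illustration only: the paper's actual adapter, its certainty scoring, and the threshold are not specified in this summary, so the per-token scores and the `mask_high_certainty` helper below are assumptions.

```python
def mask_high_certainty(tokens, certainty, threshold=0.8, mask_token="[MASK]"):
    """Replace tokens whose certainty score exceeds `threshold` with `mask_token`,
    forcing downstream encoding to rely on the remaining (implicitly spatial) words."""
    return [mask_token if c > threshold else t
            for t, c in zip(tokens, certainty)]

caption = "the red car parked behind the tall tree".split()
# Hypothetical per-token certainty scores (e.g. derived from CLIP similarity)
scores = [0.1, 0.9, 0.95, 0.3, 0.4, 0.1, 0.5, 0.6]
print(mask_high_certainty(caption, scores))
# → ['the', '[MASK]', '[MASK]', 'parked', 'behind', 'the', 'tall', 'tree']
```

The explicit identifiers ("red", "car") are hidden while the spatial phrase "parked behind the tall tree" survives, which is the behavior the paper attributes to the CLIP-LCA.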

šŸ“ Abstract
Monocular 3D Visual Grounding (Mono3DVG) is an emerging task that locates 3D objects in RGB images using text descriptions with geometric cues. However, existing methods face two key limitations. First, they often over-rely on high-certainty keywords that explicitly identify the target object while neglecting critical spatial descriptions. Second, generalized textual features contain both 2D and 3D descriptive information, thereby capturing an additional dimension of detail compared to purely 2D or 3D visual features. This characteristic leads to cross-dimensional interference when refining visual features under text guidance. To overcome these challenges, we propose Mono3DVG-EnSD, a novel framework that integrates two key components: the CLIP-Guided Lexical Certainty Adapter (CLIP-LCA) and the Dimension-Decoupled Module (D2M). The CLIP-LCA dynamically masks high-certainty keywords while retaining low-certainty implicit spatial descriptions, thereby forcing the model to develop a deeper understanding of spatial relationships in captions for object localization. Meanwhile, the D2M decouples dimension-specific (2D/3D) textual features from generalized textual features to guide the corresponding visual features at the same dimension, which mitigates cross-dimensional interference by ensuring dimensionally consistent cross-modal interactions. Through comprehensive comparisons and ablation studies on the Mono3DRefer dataset, our method achieves state-of-the-art (SOTA) performance across all metrics. Notably, it improves the challenging Far (Acc@0.5) scenario by a significant +13.54%.
Problem

Research questions and friction points this paper is trying to address.

Addresses over-reliance on explicit keywords in monocular 3D visual grounding
Solves cross-dimensional interference between 2D and 3D textual features
Enhances spatial understanding for 3D object localization from text
Innovation

Methods, ideas, or system contributions that make the work stand out.

CLIP-LCA masks high-certainty keywords to focus on spatial descriptions
D2M decouples dimension-specific textual features to reduce interference
Framework ensures dimensionally-consistent cross-modal interactions for localization
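The decoupling-then-guidance flow of the D2M can be illustrated with a small linear-algebra sketch. Everything here is an assumption for illustration: the projection matrices `W_2d`/`W_3d`, the feature sizes, and the attention form are stand-ins for whatever the paper actually learns; the point is only that 2D visual features interact exclusively with the 2D textual branch (and analogously for 3D).

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # illustrative feature dimension

# Generalized textual features for a 5-token caption
text_feats = rng.normal(size=(5, d))

# Two hypothetical learned projections splitting the generalized features into
# a 2D appearance-related branch and a 3D spatial-related branch
W_2d = rng.normal(size=(d, d))
W_3d = rng.normal(size=(d, d))
text_2d = text_feats @ W_2d  # guides 2D visual features only
text_3d = text_feats @ W_3d  # guides 3D visual features only

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Dimensionally consistent cross-modal attention: 2D visual queries attend
# only to the 2D textual branch, avoiding interference from 3D descriptions
visual_2d = rng.normal(size=(4, d))              # e.g. 4 image-region features
attn = softmax(visual_2d @ text_2d.T / np.sqrt(d))
refined_2d = attn @ text_2d                      # text-guided 2D refinement
print(refined_2d.shape)
# → (4, 8)
```

Under this sketch, swapping `text_2d` for the generalized `text_feats` would reintroduce exactly the cross-dimensional interference the D2M is designed to remove.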
Authors

Yuzhen Li
School of Artificial Intelligence and Robotics, Hunan University, Changsha, Hunan, China
Min Liu
School of Artificial Intelligence and Robotics, Hunan University, Changsha, Hunan, China
Zhaoyang Li
Ph.D. student, University of Science and Technology of China
Yuan Bian
School of Artificial Intelligence and Robotics, Hunan University, Changsha, Hunan, China
Xueping Wang
Hunan Normal University
Erbo Zhai
School of Artificial Intelligence and Robotics, Hunan University, Changsha, Hunan, China
Yaonan Wang
School of Artificial Intelligence and Robotics, Hunan University, Changsha, Hunan, China