AI Summary
To address insufficient spatial-description modeling and cross-dimensional textual-feature interference in monocular 3D visual grounding, this paper proposes a spatially aware, dimensionally decoupled text-encoding framework. Methodologically: (1) we design a CLIP-Guided Lexical Certainty Adapter (CLIP-LCA) that dynamically masks high-certainty keywords while explicitly preserving implicit spatial relationships; (2) we introduce a Dimension-Decoupled Module (D2M) that separates 2D appearance-related and 3D spatial textual representations, enabling cross-modal alignment with consistent dimensional semantics. Evaluated on the Mono3DRefer benchmark, the approach achieves state-of-the-art performance, improving Far (Acc@0.5) by 13.54% over prior methods. This significantly enhances the precision of natural-language-driven 3D object localization from single RGB images.
Abstract
Monocular 3D Visual Grounding (Mono3DVG) is an emerging task that locates 3D objects in RGB images using text descriptions containing geometric cues. However, existing methods face two key limitations. First, they often over-rely on high-certainty keywords that explicitly identify the target object while neglecting critical spatial descriptions. Second, generalized textual features mix 2D and 3D descriptive information and therefore carry an extra dimension of detail relative to purely 2D or purely 3D visual features; this mismatch causes cross-dimensional interference when visual features are refined under text guidance. To overcome these challenges, we propose Mono3DVG-EnSD, a novel framework that integrates two key components: the CLIP-Guided Lexical Certainty Adapter (CLIP-LCA) and the Dimension-Decoupled Module (D2M). The CLIP-LCA dynamically masks high-certainty keywords while retaining low-certainty implicit spatial descriptions, forcing the model to develop a deeper understanding of the spatial relationships in captions for object localization. Meanwhile, the D2M decouples dimension-specific (2D/3D) textual features from the generalized textual features and uses each to guide the visual features of the same dimension, mitigating cross-dimensional interference by ensuring dimensionally consistent cross-modal interactions. Through comprehensive comparisons and ablation studies on the Mono3DRefer dataset, our method achieves state-of-the-art (SOTA) performance across all metrics. Notably, it improves the challenging Far (Acc@0.5) scenario by a significant +13.54%.
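The certainty-based masking idea behind CLIP-LCA can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the certainty score is approximated here as the cosine similarity of each token embedding to the pooled sentence embedding (explicit object words tend to dominate the pooled representation, so they score high), and the threshold `tau` and `mask_value` are hypothetical parameters.

```python
import numpy as np

def mask_high_certainty_tokens(token_feats, text_feat, tau=0.6, mask_value=0.0):
    """Mask tokens whose (approximated) lexical certainty exceeds tau.

    token_feats: (L, D) per-token text embeddings
    text_feat:   (D,) pooled sentence embedding (e.g., from a CLIP text encoder)

    Certainty proxy: cosine similarity between each token and the pooled
    embedding. High-certainty keywords are masked out; low-certainty tokens
    (typically implicit spatial descriptions) are retained, forcing the
    downstream model to rely on spatial cues for localization.
    """
    t = token_feats / (np.linalg.norm(token_feats, axis=1, keepdims=True) + 1e-8)
    s = text_feat / (np.linalg.norm(text_feat) + 1e-8)
    certainty = t @ s                        # (L,) cosine similarities
    keep = certainty < tau                   # low-certainty tokens survive
    masked = np.where(keep[:, None], token_feats, mask_value)
    return masked, keep
```

For example, a token embedding aligned with the pooled sentence embedding (certainty near 1) is zeroed out, while an orthogonal token (certainty near 0) passes through unchanged.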
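The dimension-decoupling idea behind D2M can likewise be sketched as a toy module, under the assumption that two learned linear projections split the generalized text feature into a 2D branch and a 3D branch, each of which then guides only the visual tokens of its own dimension. The projection matrices (randomly initialized here, learned in practice), the single-query attention form, and all names are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class DimensionDecoupler:
    """Toy D2M sketch: decouple a generalized text feature into 2D/3D
    branches, then let each branch reweight same-dimension visual tokens."""

    def __init__(self, d, seed=0):
        rng = np.random.default_rng(seed)
        # Stand-ins for learned projections onto 2D-appearance and
        # 3D-spatial textual subspaces.
        self.W2d = rng.standard_normal((d, d)) / np.sqrt(d)
        self.W3d = rng.standard_normal((d, d)) / np.sqrt(d)

    def decouple(self, text_feat):
        """Split one generalized text feature (D,) into 2D and 3D branches."""
        return text_feat @ self.W2d, text_feat @ self.W3d

    def guide(self, text_branch, visual_feats):
        """Attend the text branch over visual tokens of the SAME dimension
        only, so no cross-dimensional interference can occur."""
        attn = softmax(visual_feats @ text_branch / np.sqrt(len(text_branch)))
        return attn[:, None] * visual_feats   # reweighted visual tokens
```

The point of the sketch is the routing constraint: the 2D text branch never touches 3D visual features and vice versa, which is how dimensionally consistent cross-modal interaction is enforced.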