AI Summary
The spatial referring expression grounding (REG) field lacks a systematic survey of Transformer-based approaches, benchmark datasets, evaluation metrics, and industrial applicability. Method: This paper presents the first comprehensive review of Transformer-based spatial REG research from 2018 to 2025, covering model architectures (e.g., cross-modal attention, multimodal representation learning), mainstream benchmarks (the RefCOCO series, G-Ref), and evaluation protocols (IoU, Acc@0.5). Through structured comparative analysis, it traces the technical evolution from single-stage alignment to hierarchical reasoning, and from reliance on synthetic data to generalization in real-world scenarios. Contribution/Results: The paper proposes industrial deployment guidelines with best practices, offering theoretical foundations and methodological insights for developing robust, interpretable, and production-ready vision-language alignment models. It bridges academic advances and practical engineering requirements, facilitating reproducible, scalable, and trustworthy spatial REG systems.
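The evaluation protocols named above (IoU, Acc@0.5) can be sketched in a few lines. This is a minimal illustration, not the paper's evaluation code; it assumes axis-aligned boxes in `[x1, y1, x2, y2]` format, which individual benchmarks may vary.

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes [x1, y1, x2, y2]."""
    # Coordinates of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


def acc_at_threshold(preds, gts, threshold=0.5):
    """Acc@0.5-style metric: fraction of predicted boxes whose IoU with
    the ground-truth box meets the threshold."""
    hits = sum(1 for p, g in zip(preds, gts) if iou(p, g) >= threshold)
    return hits / len(gts)
```

A prediction counts as correct under Acc@0.5 when its IoU with the ground-truth region is at least 0.5; the reported score is the fraction of expressions grounded correctly.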
Abstract
Spatial grounding, the task of associating natural language expressions with their corresponding image regions, has advanced rapidly with the introduction of transformer-based models, which significantly enhance multimodal representation and cross-modal alignment. Despite this progress, the field lacks a comprehensive synthesis of current methodologies, dataset usage, evaluation metrics, and industrial applicability. This paper presents a systematic literature review of transformer-based spatial grounding approaches from 2018 to 2025. Our analysis identifies the dominant model architectures, prevalent datasets, and widely adopted evaluation metrics, and highlights key methodological trends and best practices. The study provides structured guidance for researchers and practitioners, facilitating the development of robust, reliable, and industry-ready transformer-based spatial grounding models.