🤖 AI Summary
In referring expression grounding, misalignment between linguistic expressions and visual targets arises primarily from two issues: decoder queries initialized without semantic guidance, and ineffective use of multi-level image features. This paper proposes RefFormer to address both. First, it introduces a plug-and-play, CLIP-driven query adaptation module that generates semantically guided referential queries, sharpening the decoder's focus on target regions. Second, it fuses multi-level image features within the DETR architecture while keeping the CLIP backbone frozen, preserving CLIP's strong visual representations while enabling lightweight adaptation. Because RefFormer never fine-tunes the visual backbone, training efficiency improves significantly. Extensive experiments demonstrate state-of-the-art performance across five standard benchmarks, including RefCOCO and RefCOCO+, with consistent gains in both localization accuracy and convergence speed.
📝 Abstract
Visual Grounding aims to localize the object referred to by a natural language expression in an image. Recent DETR-based visual grounding methods have attracted considerable attention, as they directly predict the coordinates of the target object without relying on additional machinery such as pre-generated proposal candidates or pre-defined anchor boxes. However, existing research focuses primarily on designing stronger multi-modal decoders, which typically generate learnable queries by random initialization or from linguistic embeddings. This vanilla query generation inevitably increases the learning difficulty of the model, as it provides no target-related information at the start of decoding. Furthermore, these methods use only the deepest image feature during query learning, overlooking the importance of features from other levels. To address these issues, we propose a novel approach called RefFormer. It consists of a query adaptation module that can be seamlessly integrated into CLIP to generate a referential query providing prior context for the decoder, along with a task-specific decoder. Incorporating the referential query into the decoder effectively mitigates its learning difficulty and allows it to concentrate accurately on the target object. Additionally, the proposed query adaptation module can act as an adapter, preserving the rich knowledge within CLIP without tuning the parameters of the backbone network. Extensive experiments demonstrate the effectiveness and efficiency of our method, which outperforms state-of-the-art approaches on five visual grounding benchmarks.
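To make the idea of a referential query concrete, here is a minimal PyTorch sketch of the kind of query adaptation described above: learnable queries attend first to text features and then to one level of frozen CLIP image features, with one lightweight layer per feature level. All names (`QueryAdaptationLayer`, `ReferentialQueryGenerator`) and design details are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of CLIP-driven query adaptation; layer structure and
# names are assumptions for illustration, not RefFormer's real code.
import torch
import torch.nn as nn


class QueryAdaptationLayer(nn.Module):
    """Refines queries with one level of (frozen) image features,
    conditioned on the text embedding, via cross-attention."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.text_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.image_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )
        self.norm3 = nn.LayerNorm(dim)

    def forward(self, queries, text_feats, image_feats):
        # Inject linguistic context, then gather target-related visual cues.
        q = self.norm1(queries + self.text_attn(queries, text_feats, text_feats)[0])
        q = self.norm2(q + self.image_attn(q, image_feats, image_feats)[0])
        return self.norm3(q + self.ffn(q))


class ReferentialQueryGenerator(nn.Module):
    """Stacks one adaptation layer per CLIP feature level; only these
    lightweight layers and the queries would be trained, the backbone
    that produced the features stays frozen."""

    def __init__(self, dim: int, num_levels: int, num_queries: int = 1):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, dim))
        self.layers = nn.ModuleList(
            QueryAdaptationLayer(dim) for _ in range(num_levels)
        )

    def forward(self, text_feats, multi_level_image_feats):
        b = text_feats.size(0)
        q = self.queries.unsqueeze(0).expand(b, -1, -1)
        for layer, feats in zip(self.layers, multi_level_image_feats):
            # Progressively refine the query across feature levels.
            q = layer(q, text_feats, feats)
        return q  # referential query: prior context for the DETR-style decoder
```

In this reading, the generator's output replaces the randomly initialized decoder queries of a vanilla DETR-style grounding head, so decoding starts from target-related context rather than from scratch.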