🤖 AI Summary
This work addresses zero-shot visual grounding: precisely localizing the image region described by a free-form text query using only a frozen large vision-language model (LVLM), with no fine-tuning or auxiliary modules. Methodologically, we discover that frozen LVLMs inherently contain a small number of critical attention heads, termed "localization heads," that exhibit strong text-to-image semantic alignment; image regions matching the description are extracted directly from these heads' text-to-image attention maps. Our core contribution is revealing that LVLMs possess an intrinsic, high-fidelity grounding capability without any training: only three of the thousands of attention heads suffice. Evaluated on standard benchmarks including RefCOCO, our approach achieves state-of-the-art zero-shot performance, comparable to leading fine-tuned methods, while eliminating parameter updates and additional architecture design. This significantly lowers the deployment barrier and computational overhead of visual grounding.
📝 Abstract
Visual grounding seeks to localize the image region corresponding to a free-form text description. Recently, the strong multimodal capabilities of Large Vision-Language Models (LVLMs) have driven substantial improvements in visual grounding, though existing LVLM-based methods inevitably require fine-tuning and additional model components to explicitly generate bounding boxes or segmentation masks. In contrast, we discover that a few attention heads in frozen LVLMs already demonstrate strong visual grounding capabilities. We refer to these heads, which consistently capture object locations tied to text semantics, as localization heads. Building on them, we introduce a straightforward and effective training-free visual grounding framework that uses the text-to-image attention maps of localization heads to identify target objects. Surprisingly, only three out of thousands of attention heads suffice to achieve localization performance competitive with existing LVLM-based visual grounding methods that require fine-tuning. Our findings suggest that LVLMs can innately ground objects through a deep comprehension of the text-image relationship, as they implicitly focus on relevant image regions when generating informative text outputs. All source code will be made publicly available.
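The core mechanism described above can be sketched in a few lines: given the text-to-image attention weights of a handful of selected heads, average them over the text tokens into a per-patch saliency map, binarize it, and take the bounding box of the surviving patches. The sketch below is a minimal illustration under assumed shapes, not the paper's implementation; the head indices, grid size, and threshold are all hypothetical placeholders.

```python
import numpy as np

def ground_from_attention(attn, head_ids, grid=(24, 24), thresh=0.6):
    """Training-free grounding sketch.

    attn:     (num_heads, num_text_tokens, num_image_tokens) array of
              text-to-image attention weights from a frozen LVLM.
    head_ids: indices of the presumed localization heads (hypothetical here;
              the paper reports that only three heads suffice).
    Returns (row_min, row_max, col_min, col_max) in patch coordinates,
    or None if no patch clears the threshold.
    """
    # Average the selected heads' attention over all text tokens to get
    # one relevance score per image patch.
    sal = attn[head_ids].mean(axis=(0, 1))               # (num_image_tokens,)
    sal = (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)  # normalize to [0, 1]
    mask = sal.reshape(grid) >= thresh                   # binary patch mask
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        return None
    return rows.min(), rows.max(), cols.min(), cols.max()

# Toy example: 8 heads, 5 text tokens, a 24x24 patch grid, with a synthetic
# "object" lighting up patches rows 10..14, cols 6..9 in head 2 only.
rng = np.random.default_rng(0)
attn = rng.uniform(0.0, 0.1, size=(8, 5, 24 * 24))
obj = np.zeros((24, 24))
obj[10:15, 6:10] = 1.0
attn[2] += obj.ravel()

box = ground_from_attention(attn, head_ids=[2])
print(box)  # a tight patch-level box around the synthetic object
```

In practice the attention tensor would come from a forward pass of the frozen LVLM (e.g. with attention outputs enabled), and the patch-level box would be scaled back to pixel coordinates; both steps are model-specific and omitted here.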