Uncovering Grounding IDs: How External Cues Shape Multi-Modal Binding

📅 2025-09-28
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Large vision-language models (LVLMs) show strong multimodal performance but remain limited in structured reasoning and precise visual grounding. To explain why simple external structures help, we propose “Grounding IDs”: latent symbolic identifiers induced by external visual cues (e.g., partitions, bounding-box annotations) that bind image regions to their corresponding textual tokens. Through representation analysis, causal intervention, and attention probing, we find that Grounding IDs act as mediators in embedding space: they strengthen component-level cross-modal attention, narrow the modality gap between image and text representations, improve grounding accuracy, and suppress hallucination. The work frames external structural cues as an interpretable symbolic mechanism in LVLMs, offering practical gains in interpretability, robustness, and alignment fidelity.
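To make the setup concrete, here is a minimal sketch (Python, using Pillow) of the kind of external visual structure the summary describes: overlaying a labeled grid partition on an image before passing it to the LVLM. The grid size, colors, and label scheme are illustrative assumptions, not the paper's exact protocol.

# Minimal sketch: overlay a labeled grid partition on an image as an
# external visual cue. Grid size and label placement are illustrative
# assumptions, not the paper's exact protocol.
from PIL import Image, ImageDraw

def add_partition_cues(image_path, rows=3, cols=3):
    img = Image.open(image_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    w, h = img.size
    cell_w, cell_h = w / cols, h / rows
    for r in range(rows):
        for c in range(cols):
            x0, y0 = c * cell_w, r * cell_h
            # Partition border.
            draw.rectangle([x0, y0, x0 + cell_w, y0 + cell_h],
                           outline="red", width=3)
            # Symbolic partition label (e.g., "A1"), the external cue
            # hypothesized to induce a Grounding ID.
            label = f"{chr(ord('A') + r)}{c + 1}"
            draw.text((x0 + 5, y0 + 5), label, fill="red")
    return img

# Usage: add_partition_cues("scene.jpg").save("scene_grid.jpg")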

๐Ÿ“ Abstract
Large vision-language models (LVLMs) show strong performance across multimodal benchmarks but remain limited in structured reasoning and precise grounding. Recent work has demonstrated that adding simple visual structures, such as partitions and annotations, improves accuracy, yet the internal mechanisms underlying these gains remain unclear. We investigate this phenomenon and propose the concept of Grounding IDs, latent identifiers induced by external cues that bind objects to their designated partitions across modalities. Through representation analysis, we find that these identifiers emerge as robust within-partition alignment in embedding space and reduce the modality gap between image and text. Causal interventions further confirm that these identifiers mediate binding between objects and symbolic cues. We show that Grounding IDs strengthen attention between related components, which in turn improves cross-modal grounding and reduces hallucinations. Taken together, our results identify Grounding IDs as a key symbolic mechanism explaining how external cues enhance multimodal binding, offering both interpretability and practical improvements in robustness.
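The abstract's representation analysis suggests two simple measurements. Below is a hedged sketch of how one might compute them, assuming per-partition image and text embeddings have already been pooled from the model's hidden states; the layer choice and pooling strategy are assumptions, not the paper's procedure.

# Minimal sketch of the two representation-level measurements described
# in the abstract, assuming img_emb / txt_emb hold pooled hidden states
# of shape (P, d) for P partitions in each modality (layer and pooling
# choices are assumptions).
import torch
import torch.nn.functional as F

def within_partition_alignment(img_emb, txt_emb):
    # Cosine similarity between image and text embeddings of the SAME
    # partition; Grounding IDs should make this high relative to
    # mismatched cross-partition pairs.
    img = F.normalize(img_emb, dim=-1)   # (P, d)
    txt = F.normalize(txt_emb, dim=-1)   # (P, d)
    sim = img @ txt.T                    # (P, P) all pairs
    same = sim.diag().mean()             # matched partitions
    cross = (sim.sum() - sim.diag().sum()) / (sim.numel() - len(sim))
    return same.item(), cross.item()

def modality_gap(img_emb, txt_emb):
    # Distance between the modality centroids; external cues are
    # reported to shrink this gap.
    return (img_emb.mean(0) - txt_emb.mean(0)).norm().item()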
Problem

Research questions and friction points this paper is trying to address.

Investigating how external cues improve multimodal binding in vision-language models
Identifying latent Grounding IDs that align objects across the image and text modalities
Explaining how enhanced cross-modal attention reduces hallucinations (probed in the sketch below)
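A sketch of the attention probing referenced in the last item above: total attention mass flowing from a caption's object tokens to the image tokens of its designated partition. How the LVLM interleaves image and text tokens is an assumption here; the paper's exact probe may differ.

# Minimal sketch of cross-modal attention probing. Token-index
# bookkeeping is an assumption about how the LVLM interleaves image
# and text tokens.
import torch

def cross_modal_attention_mass(attn, text_idx, image_idx):
    # attn: (heads, seq, seq) attention weights from one layer.
    # text_idx: indices of the object's text tokens (queries).
    # image_idx: indices of the partition's image tokens (keys).
    block = attn[:, text_idx][:, :, image_idx]  # (heads, |text|, |image|)
    return block.sum(-1).mean().item()          # mean over heads and queries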
Innovation

Methods, ideas, or system contributions that make the work stand out.

Grounding IDs bind objects to their partitions across modalities (see the intervention sketch after this list)
Latent identifiers narrow the modality gap in embedding space
External cues strengthen cross-modal attention and grounding
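A sketch of a causal intervention in the spirit of the first item above: estimate a candidate Grounding-ID direction, swap the identifier components of two objects' hidden states along it, and check whether the model's binding follows the cue. The direction estimate (difference of partition means) is an assumption, not the paper's exact method.

# Minimal sketch of a causal intervention in embedding space. The
# direction estimate below is an assumption, not the paper's method.
import torch

def estimate_id_direction(states_p, states_q):
    # Candidate Grounding-ID direction: difference of mean hidden
    # states between partitions p and q, each of shape (n, d).
    d = states_p.mean(0) - states_q.mean(0)
    return d / d.norm()

def swap_grounding_id(h_a, h_b, u):
    # Swap the identifier components of two objects' hidden states
    # (each of shape (d,)) along the unit direction u; if Grounding
    # IDs mediate binding, the objects' partition assignments should
    # follow the swap.
    ca, cb = h_a @ u, h_b @ u
    return h_a + (cb - ca) * u, h_b + (ca - cb) * u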
Authors
Hosein Hasani, Sharif University of Technology (Machine Learning)
Amirmohammad Izadi, Department of Computer Engineering, Sharif University of Technology
Fatemeh Askari, Department of Computer Engineering, Sharif University of Technology
Mobin Bagherian, Department of Computer Engineering, Sharif University of Technology
Sadegh Mohammadian, Department of Computer Engineering, Sharif University of Technology
Mohammad Izadi, Department of Computer Engineering, Sharif University of Technology
Mahdieh Soleymani Baghshah, Associate Professor, Computer Engineering Department, Sharif University of Technology (Deep Learning, Machine Learning)