🤖 AI Summary
Visual language models (VLMs) exhibit a significant bottleneck in extracting factual knowledge from images: factual question answering accuracy drops by an average of 19% when inputs are images rather than text. Method: Leveraging mechanistic interpretability techniques—including cross-modal attention flow analysis, inter-layer information propagation tracking, and attribution localization—we identify, for the first time, a “delayed information flow” from image tokens to query tokens, revealing a structural misalignment between mid-level visual processing and deep semantic reasoning. Contribution/Results: We pinpoint the core bottleneck: discriminative visual features activate only in deeper transformer layers, whereas essential image understanding occurs at mid-layers of the language decoder—severely constraining the number of layers available for subsequent coherent reasoning. This work provides empirically grounded, interpretable insights to guide VLM architecture optimization and the design of principled multimodal alignment mechanisms.
📝 Abstract
Vision-language models (VLMs) excel at extracting and reasoning about information from images. Yet, their capacity to leverage internal knowledge about specific entities remains underexplored. This work investigates the disparity in model performance when answering factual questions about an entity described in text versus depicted in an image. Our results reveal a significant accuracy drop --averaging 19%-- when the entity is presented visually instead of textually. We hypothesize that this decline arises from limitations in how information flows from image tokens to query tokens. We use mechanistic interpretability tools to reveal that, although image tokens are preprocessed by the vision encoder, meaningful information flow from these tokens occurs only in the much deeper layers. Furthermore, critical image processing happens in the language model's middle layers, allowing few layers for consecutive reasoning, highlighting a potential inefficiency in how the model utilizes its layers for reasoning. These insights shed light on the internal mechanics of VLMs and offer pathways for enhancing their reasoning capabilities.