🤖 AI Summary
Existing benchmarks inadequately expose the internal mechanisms underlying visual perception in advanced vision-language models (VLMs), obscuring fundamental limitations in basic visual understanding.
Method: We propose a feature-probing analytical paradigm to quantitatively assess representational readability, information preservation, and task relevance across intermediate layers—including the visual encoder, cross-modal projection module, and LLM decoder—using a diagnostic benchmark covering core visual dimensions (e.g., shape, color, spatial relations).
Contribution/Results: Our analysis reveals a critical “perception–reasoning disconnect”: while VLMs achieve strong end-to-end performance, their low-level visual representations exhibit poor robustness, intermediate-layer semantics suffer severe degradation, and the cross-modal projection module emerges as a key bottleneck. This work provides the first empirical, interpretable evidence of intrinsic architectural fractures in VLM visual understanding, offering concrete, layer-wise insights to guide principled architectural improvements.
📝 Abstract
Vision-Language Models (VLMs) have emerged as general-purpose tools for addressing a variety of complex computer-vision problems. Such models have been shown to be highly capable yet, at the same time, to lack some basic visual understanding skills. In this paper, we set out to understand the limitations of SoTA VLMs on fundamental visual tasks by constructing a series of tests that probe which specific components of their design may be lacking. Importantly, we go significantly beyond current benchmarks, which measure only the final performance of a VLM's response, by comparing and contrasting it with the performance of probes trained directly on features obtained from the visual encoder, the intermediate vision-language projection, and the LLM-decoder output. In doing so, we uncover shortcomings in VLMs and make a number of important observations about their capabilities, robustness, and how they process visual information. We hope our insights will guide progress in further improving VLMs.
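The probing setup described above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the stage names, feature dimensions, and synthetic data are assumptions, and the probe choice (logistic regression) stands in for whatever readout the authors use. The idea is to extract features for a labeled diagnostic task from each frozen stage of the VLM pipeline and fit a linear probe per stage; probe accuracy then measures how much task-relevant information each stage preserves.

```python
# Sketch of per-stage linear probing of frozen VLM features.
# Synthetic activations stand in for real extracted features;
# stage names and noise levels are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def probe_accuracy(features: np.ndarray, labels: np.ndarray) -> float:
    """Fit a linear probe on frozen features; return held-out accuracy."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        features, labels, test_size=0.25, random_state=0)
    probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return probe.score(X_te, y_te)

# Stand-in features: in practice these would be activations extracted
# from each VLM stage for the same set of diagnostic images
# (e.g. a color- or shape-classification probe task).
n, d, n_classes = 600, 64, 4
labels = rng.integers(0, n_classes, size=n)
class_means = rng.normal(size=(n_classes, d))

stages = {
    "visual_encoder": class_means[labels] + 0.5 * rng.normal(size=(n, d)),
    "projection":     class_means[labels] + 1.5 * rng.normal(size=(n, d)),
    "llm_decoder":    class_means[labels] + 1.0 * rng.normal(size=(n, d)),
}

for name, feats in stages.items():
    print(f"{name}: probe accuracy = {probe_accuracy(feats, labels):.2f}")
```

Comparing probe accuracies across stages against the model's end-to-end answer accuracy is what exposes where task-relevant information is lost, e.g. a drop at the projection stage would point to the cross-modal bottleneck the summary describes.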