🤖 AI Summary
Current vision-language models (VLMs) excel at object recognition but exhibit severe limitations in spatial reasoning, particularly in understanding relative spatial relationships. We identify two fundamental deficiencies: (1) visual embeddings are overly semanticized into “bag-of-tokens” representations, and (2) their large L2 norms suppress discriminative spatial cues. To address these issues, we propose two lightweight, interpretable interventions: (1) normalizing the norms of visual embeddings to alleviate information suppression, and (2) explicitly extracting mid-level spatial features. We construct a synthetic spatial reasoning dataset and evaluate our approach across multiple benchmarks, including NLVR2 and SPARTUN, demonstrating consistent and significant improvements in spatial reasoning performance. Our results provide the first empirical evidence that the spatial perception capability of VLMs can be effectively restored through structured, architecture-agnostic interventions, without retraining or architectural modification.
📝 Abstract
Vision-Language Models (VLMs) excel at identifying and describing objects but struggle with spatial reasoning, such as accurately understanding the relative positions of objects. Inspired by the dual-pathway (ventral-dorsal) model of human vision, we investigate why VLMs fail at spatial tasks despite strong object recognition capabilities. Our interpretability-driven analysis reveals a critical underlying cause: vision embeddings in VLMs are treated primarily as a semantic “bag-of-tokens,” overshadowing subtle yet crucial positional cues due to their disproportionately large embedding norms. We validate this insight through extensive diagnostic experiments, which show minimal performance impact when token order or fine-grained spatial details are removed. Guided by these findings, we propose simple, interpretable interventions, including normalizing vision embedding norms and extracting spatially rich mid-layer features, to restore spatial awareness. Empirical results on both our synthetic data and standard benchmarks demonstrate improved spatial reasoning capabilities, highlighting the value of interpretability-informed design choices. Our study not only uncovers fundamental limitations in current VLM architectures but also provides actionable insights for enhancing structured perception of visual scenes.
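The two interventions described above can be sketched in a few lines. This is a minimal, hypothetical illustration using NumPy, not the authors' implementation: `normalize_token_norms` rescales each visual token to a common L2 norm so that large-norm semantic tokens no longer drown out low-norm tokens carrying positional cues, and `fuse_mid_layer_features` blends final-layer (semantic) features with spatially richer mid-layer features. The function names, the blend weight `alpha`, and the choice of mid layer are all assumptions for illustration.

```python
import numpy as np

def normalize_token_norms(vision_embeds, target_norm=1.0, eps=1e-6):
    """Rescale each visual token embedding to a common L2 norm.

    vision_embeds: (num_tokens, dim) array of visual token embeddings.
    Disproportionately large norms otherwise dominate downstream
    attention and suppress subtle spatial information.
    """
    norms = np.linalg.norm(vision_embeds, axis=-1, keepdims=True)
    return vision_embeds * (target_norm / (norms + eps))

def fuse_mid_layer_features(hidden_states, mid_layer, alpha=0.5):
    """Blend final-layer semantic features with mid-layer spatial features.

    hidden_states: list of per-layer token embeddings, each (num_tokens, dim).
    alpha: weight on the mid-layer features (hypothetical hyperparameter).
    """
    return (1 - alpha) * hidden_states[-1] + alpha * hidden_states[mid_layer]

# Example: tokens with wildly different norms become comparable.
embeds = np.random.randn(4, 8) * np.array([[1.0], [10.0], [0.1], [5.0]])
normalized = normalize_token_norms(embeds)
print(np.linalg.norm(normalized, axis=-1))  # all ~1.0
```

In practice such a normalization would be applied to the vision encoder's output tokens before they are projected into the language model's embedding space, leaving the rest of the pipeline untouched, which is what makes the intervention architecture-agnostic.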