🤖 AI Summary
Existing evaluations of vision-language models (VLMs) rely predominantly on behavioral metrics and lack systematic, representation-level interpretability analysis. Method: We introduce VLM-Lens, the first open-source interpretability toolkit to support such a broad range of VLMs. It features a unified YAML-based configuration interface that extracts intermediate hidden representations from arbitrary layers while abstracting away architectural heterogeneity; its modular design accommodates 16 mainstream VLMs and over 30 variants, integrates diverse interpretability methods, and supports extension to new models without changes to the core logic. Contribution/Results: Empirical evaluation demonstrates VLM-Lens's effectiveness in cross-model and cross-layer conceptual representation analysis, uncovering hierarchical evolution patterns and systematic differences in concept activation profiles. The toolkit establishes a reproducible, extensible foundation for probing the internal mechanisms of VLMs.
📝 Abstract
We introduce VLM-Lens, a toolkit designed to enable systematic benchmarking, analysis, and interpretation of vision-language models (VLMs) by supporting the extraction of intermediate outputs from any layer during the forward pass of open-source VLMs. VLM-Lens provides a unified, YAML-configurable interface that abstracts away model-specific complexities and supports user-friendly operation across diverse VLMs. It currently supports 16 state-of-the-art base VLMs and over 30 of their variants, and is extensible to accommodate new models without changes to the core logic. The toolkit integrates easily with various interpretability and analysis methods. We demonstrate its usage with two simple analytical experiments, revealing systematic differences in the hidden representations of VLMs across layers and target concepts. VLM-Lens is released as an open-source project to accelerate community efforts in understanding and improving VLMs.
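To illustrate the YAML-driven workflow the abstract describes, a configuration for extracting hidden states from a chosen model and set of layers might look like the sketch below. All key names, values, and file paths here are hypothetical for illustration; they are not taken from the actual VLM-Lens schema.

```yaml
# Hypothetical VLM-Lens-style configuration (illustrative only;
# key names do not reflect the toolkit's actual schema).
model:
  name: llava-1.5-7b          # one of the supported base VLMs (assumed identifier)
  device: cuda:0
extraction:
  layers: [0, 8, 16, 24]      # layer indices whose intermediate outputs to capture
  component: hidden_states    # which intermediate representation to record
input:
  image_dir: ./images         # images to run through the forward pass
  prompts_file: ./prompts.txt # paired text prompts
output:
  path: ./representations.db  # where extracted tensors are stored for later probing
```

The appeal of such a unified configuration is that swapping `model.name` to a different supported VLM leaves the rest of the extraction pipeline unchanged, which is what allows the cross-model, cross-layer comparisons described in the results.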