🤖 AI Summary
Current vision-language models (VLMs) have not been systematically evaluated on their ability to perceive core visual attributes of charts without textual cues: purpose (e.g., GUI vs. schematic), visual encoding (e.g., bar chart vs. node-link diagram), and dimensionality (2D vs. 3D). This gap hinders reliable human-AI visualization collaboration.
Method: We conduct a zero-shot evaluation of 13 state-of-the-art VLMs, tasking them with classifying scientific visualizations along these three attribute dimensions and scoring them against expert labels from the human-centric VisType typology.
Contribution/Results: VLMs recognize chart purpose and dimensionality with relatively high accuracy but show clear limitations in fine-grained discrimination of encoding types. Performance does not scale monotonically with model size; larger parameter counts do not guarantee higher accuracy. These findings expose bottlenecks in VLMs' semantic understanding of visualizations and underscore the need for human-in-the-loop supervision and domain-specific adaptation. The study provides a preliminary benchmark and actionable insights for building trustworthy, visualization-aware AI systems.
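The evaluation protocol summarized above can be sketched as a small per-criterion accuracy loop. This is an illustrative reconstruction, not the paper's actual code: the label sets, prompt wording, and the stubbed model call are assumptions standing in for the real VisType categories and VLM APIs.

```python
# Sketch of a zero-shot, per-criterion chart-classification evaluation.
# Label sets below echo the examples in the abstract; they are not the
# full VisType typology.
from dataclasses import dataclass

CRITERIA = {
    "purpose": ["schematic", "GUI", "visualization"],
    "encoding": ["bar", "point", "line", "node-link"],
    "dimensionality": ["2D", "3D"],
}

@dataclass
class Example:
    image_id: str
    labels: dict  # expert ground-truth label per criterion

def build_prompt(criterion: str) -> str:
    """Zero-shot prompt: ask for exactly one label, image only, no chart text."""
    options = ", ".join(CRITERIA[criterion])
    return (f"Classify this chart's {criterion}. "
            f"Answer with exactly one of: {options}.")

def evaluate(examples, model_fn):
    """Per-criterion accuracy of `model_fn(image_id, prompt) -> label`,
    where `model_fn` wraps whichever VLM is under test."""
    scores = {}
    for criterion in CRITERIA:
        prompt = build_prompt(criterion)
        correct = sum(
            model_fn(ex.image_id, prompt) == ex.labels[criterion]
            for ex in examples
        )
        scores[criterion] = correct / len(examples)
    return scores
```

In this framing, each of the 13 VLMs supplies its own `model_fn`, and the study's finding corresponds to `scores["purpose"]` and `scores["dimensionality"]` being high while `scores["encoding"]` lags.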
📝 Abstract
Vision-language models (VLMs) hold promise for enhancing visualization tools, but effective human-AI collaboration hinges on a shared perceptual understanding of visual content. Prior studies assessed VLM visualization literacy through interpretive tasks, revealing an over-reliance on textual cues rather than genuine visual analysis. Our study investigates a more foundational skill underpinning such literacy: the ability of VLMs to recognize a chart's core visual properties as humans do. We task 13 diverse VLMs with classifying scientific visualizations based solely on visual stimuli, according to three criteria: purpose (e.g., schematic, GUI, visualization), encoding (e.g., bar, point, node-link), and dimensionality (e.g., 2D, 3D). Using expert labels from the human-centric VisType typology as ground truth, we find that VLMs often identify purpose and dimensionality accurately but struggle with specific encoding types. Our preliminary results show that larger model size does not always equate to superior performance and highlight the need for careful integration of VLMs in visualization tasks, with human supervision to ensure reliable outcomes.