The Perils of Chart Deception: How Misleading Visualizations Affect Vision-Language Models

📅 2025-08-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the robustness of vision-language models (VLMs) against misleading data visualizations. To address this problem, we systematically design eight representative classes of visual deception—including axis truncation, distorted scaling, and misleading color encoding—and conduct the first large-scale, joint quantitative and qualitative evaluation across ten state-of-the-art VLMs. Results demonstrate that most models exhibit significant vulnerability: given identical deceptive charts, they consistently generate semantically incorrect interpretations, revealing structural weaknesses in their visual reasoning capabilities. Our work not only identifies critical blind spots in VLMs’ comprehension of data visualizations but also introduces VisDeceptBench—the first dedicated benchmark for evaluating VLM robustness against misleading charts. Furthermore, we provide empirically grounded insights and methodological guidance to inform the design of deception-resistant mechanisms for future VLM development.
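The first deception class named above, axis truncation, distorts perception purely geometrically. As a minimal illustrative sketch (not from the paper; the function `drawn_ratio` is a hypothetical helper), starting the y-axis above zero inflates the apparent gap between two nearly equal bars:

```python
# Sketch: why a truncated y-axis exaggerates differences (illustrative only).
# Two bars with true values 95 and 100 differ by ~5%, but if the axis
# baseline is moved from 0 to 90, the drawn bar heights differ by 100%.

def drawn_ratio(a: float, b: float, baseline: float = 0.0) -> float:
    """Ratio of rendered bar heights when the axis starts at `baseline`."""
    return (b - baseline) / (a - baseline)

true_ratio = drawn_ratio(95, 100)               # axis starts at 0
truncated = drawn_ratio(95, 100, baseline=90)   # axis starts at 90

print(f"full axis:      {true_ratio:.2f}")   # 1.05
print(f"truncated axis: {truncated:.2f}")    # 2.00
```

A viewer (or a VLM reading the rendered chart) who judges magnitude from bar height alone would report a twofold difference where the data shows a 5% one, which is exactly the kind of semantically incorrect interpretation the study measures.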

📝 Abstract
Information visualizations are powerful tools that help users quickly identify patterns, trends, and outliers, facilitating informed decision-making. However, when visualizations incorporate deceptive design elements, such as truncated or inverted axes, unjustified 3D effects, or violations of best practices, they can mislead viewers and distort understanding, spreading misinformation. While some deceptive tactics are obvious, others subtly manipulate perception while maintaining a facade of legitimacy. As Vision-Language Models (VLMs) are increasingly used to interpret visualizations, especially by non-expert users, it is critical to understand how susceptible these models are to deceptive visual designs. In this study, we conduct an in-depth evaluation of VLMs' ability to interpret misleading visualizations. By analyzing over 16,000 responses from ten different models across eight distinct types of misleading chart designs, we demonstrate that most VLMs are deceived by them. This leads to altered interpretations of charts, despite the underlying data remaining the same. Our findings highlight the need for robust safeguards in VLMs against visual misinformation.
Problem

Research questions and friction points this paper is trying to address.

Assesses how susceptible VLMs are to deceptive chart designs, especially as non-expert users increasingly rely on them to interpret visualizations
Asks whether deceptive designs alter model interpretations even when the underlying data is unchanged
Highlights the need for safeguards in VLMs against visual misinformation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Systematically designs eight representative classes of chart deception, including axis truncation, distorted scaling, and misleading color encoding
Introduces VisDeceptBench, the first dedicated benchmark for evaluating VLM robustness against misleading charts
Conducts the first large-scale joint quantitative and qualitative evaluation (over 16,000 responses across ten state-of-the-art VLMs) and distills guidance for deception-resistant model design