🤖 AI Summary
This work addresses the prevalence of misleading visualizations that violate fundamental design principles, a problem exacerbated by existing tools’ lack of contextual understanding and by the unreliability of general-purpose large language models as sources of actionable guidance. To bridge this gap, the authors propose an approach that integrates chart de-rendering, vision-language reasoning, and a knowledge base of visualization principles to reconstruct structured chart representations from images, identify design flaws, and generate interpretable, executable improvement suggestions. The system supports human-in-the-loop interactive optimization and re-rendering. Evaluated on 1,000 charts from the Chart2Code benchmark, it produced 10,452 recommendations, clustered into 10 categories (including axis formatting and color accessibility), demonstrating its effectiveness in enhancing both visualization quality and user literacy.
📝 Abstract
Data visualizations are central to scientific communication, journalism, and everyday decision-making, yet they frequently contain errors that can distort interpretation or mislead audiences. Rule-based visualization linters can flag violations, but they miss context and do not suggest meaningful design changes. Directly querying general-purpose LLMs about visualization quality is unreliable: because they are not trained to follow visualization design principles, they often produce inconsistent or incorrect feedback. In this work, we introduce a framework that combines chart de-rendering, automated analysis, and iterative improvement to deliver actionable, interpretable feedback on visualization design. Our system reconstructs the structure of a chart from an image, identifies design flaws using vision-language reasoning, and proposes concrete modifications supported by established principles in visualization research. Users can selectively apply these improvements and re-render updated figures, creating a feedback loop that promotes both higher-quality visualizations and the development of visualization literacy. In our evaluation on 1,000 charts from the Chart2Code benchmark, the system generated 10,452 design recommendations, which clustered into 10 coherent categories (e.g., axis formatting, color accessibility, legend consistency). These results highlight the promise of LLM-driven recommendation systems for delivering structured, principle-based feedback on visualization design, opening the door to more intelligent and accessible authoring tools.
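To make the described feedback loop concrete, the sketch below simulates the pipeline on a structured chart representation: a de-rendered spec is linted against a tiny rule base of design principles (mirroring the recommendation categories such as axis formatting and color accessibility), and the user selectively accepts fixes before re-rendering. All names here (`ChartSpec`, `lint`, `apply_selected`) are illustrative assumptions, not the authors' actual API; in the real system, de-rendering is performed by vision-language reasoning over a chart image, and re-rendering produces an updated figure.

```python
# Hypothetical sketch of the paper's feedback loop (illustrative names, not
# the authors' implementation). De-rendering is simulated with a hand-built
# spec; a real system would reconstruct it from a chart image.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class ChartSpec:
    chart_type: str
    y_axis_starts_at_zero: bool = True
    colorblind_safe_palette: bool = True
    axis_labels: tuple = ()

def lint(spec: ChartSpec) -> list:
    """Check the spec against a small rule base of visualization principles.

    Each issue carries a category, an interpretable message, and an
    executable fix, echoing the paper's actionable-recommendation format.
    """
    issues = []
    if spec.chart_type == "bar" and not spec.y_axis_starts_at_zero:
        issues.append({
            "category": "axis formatting",
            "message": "Bar charts should use a zero-based y-axis.",
            "fix": lambda s: replace(s, y_axis_starts_at_zero=True),
        })
    if not spec.colorblind_safe_palette:
        issues.append({
            "category": "color accessibility",
            "message": "Palette is not colorblind-safe; switch to an accessible scheme.",
            "fix": lambda s: replace(s, colorblind_safe_palette=True),
        })
    if not spec.axis_labels:
        issues.append({
            "category": "axis formatting",
            "message": "Axes are unlabeled; add descriptive labels.",
            "fix": lambda s: replace(s, axis_labels=("x", "y")),
        })
    return issues

def apply_selected(spec: ChartSpec, issues, accepted_indices) -> ChartSpec:
    """Human-in-the-loop step: apply only the fixes the user accepts."""
    for i in accepted_indices:
        spec = issues[i]["fix"](spec)
    return spec  # the updated spec would then be re-rendered

# Simulated de-rendered spec for a flawed bar chart.
spec = ChartSpec("bar", y_axis_starts_at_zero=False, colorblind_safe_palette=False)
issues = lint(spec)                                    # three issues flagged
improved = apply_selected(spec, issues, accepted_indices=[0, 1])
```

In this toy run, the user accepts the axis and color fixes but declines the labeling suggestion, so the re-rendered spec keeps unlabeled axes; this selective application is what distinguishes the interactive loop from a fully automatic linter.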