🤖 AI Summary
Interactive visualization editors let users author charts without code but lack real-time guidance grounded in visual communication principles, limiting design quality. To address this, we present an end-to-end feedback framework that combines an off-the-shelf large language model (LLM), domain-specific visualization design guidelines, and image-aware perceptual filters to generate actionable, personalized natural-language design recommendations. Our method injects structured domain knowledge via prompt engineering and uses perceptual filters to extract salient visual metrics from the chart image, grounding the LLM's suggestions in perceptual evidence. In a longitudinal, multi-day study in which 13 designers, ranging from novices to experts, each authored a new visualization from scratch, the system supported iterative refinement and deepened design reflection across experience levels. This work offers empirical evidence that LLMs can deliver principle-based feedback for visualization design, pointing toward intelligent, adaptive design-assistance tools.
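The pipeline described above, injecting design guidelines and image-derived metrics into an LLM prompt, might be sketched roughly as follows. This is a minimal illustration, not the system's actual implementation: the guideline texts, metric names, and function names below are all hypothetical.

```python
# Hypothetical sketch: combine a guideline preamble with perceptual
# metrics extracted from a chart image into a single LLM prompt.
# Guideline wording and metric names are illustrative placeholders.

DESIGN_GUIDELINES = [
    "Use a colorblind-safe palette.",
    "Label axes with units.",
    "Avoid chartjunk and excessive gridlines.",
]

def build_feedback_prompt(metrics: dict) -> str:
    """Assemble the feedback prompt from guidelines and image metrics."""
    preamble = "\n".join(f"- {g}" for g in DESIGN_GUIDELINES)
    metric_lines = "\n".join(f"{k}: {v}" for k, v in metrics.items())
    return (
        "You are a visualization design assistant.\n"
        "Follow these design guidelines:\n"
        f"{preamble}\n\n"
        "Measured properties of the user's chart image:\n"
        f"{metric_lines}\n\n"
        "Give actionable, personalized feedback in natural language."
    )

prompt = build_feedback_prompt({"colorfulness": 41.2, "text_coverage": 0.08})
```

The assembled string would then be sent to the LLM as a single request; the guideline preamble steers the model toward principle-based critiques, while the numeric metrics anchor its feedback in what the image actually shows.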
📝 Abstract
Interactive visualization editors empower users to author visualizations without writing code, but do not provide guidance on the art and craft of effective visual communication. In this paper, we explore the potential of using off-the-shelf large language models (LLMs) to provide actionable and customized feedback to visualization designers. Our implementation, VISUALIZATIONARY, demonstrates how ChatGPT can be used for this purpose through two key components: a preamble of visualization design guidelines and a suite of perceptual filters that extract salient metrics from a visualization image. We present findings from a longitudinal user study involving 13 visualization designers (6 novices, 4 intermediates, and 3 experts) who authored a new visualization from scratch over several days. Our results indicate that providing guidance in natural language via an LLM can aid even seasoned designers in refining their visualizations. All our supplemental materials are available at https://osf.io/v7hu8.