🤖 AI Summary
This paper identifies systemic risks in deploying generative models for accessible data visualization: visually impaired users face “verification incapacity” (an inability to validate chart content) and “coerced reliance” on potentially biased model descriptions, a combination that amplifies the harm of algorithmic bias. To investigate, the authors introduce an analytical framework centered on these two constructs and design a cross-model “telephone game” experiment that simulates multi-round propagation of visualization interpretations. Results show that models systematically introduce and reinforce both semantic and statistical biases during successive re-descriptions, significantly degrading descriptive reliability. Based on these findings, the paper proposes a three-part co-design path: enhancing model transparency, giving users controllable editing capabilities, and institutionalizing disability-led participatory design. The work provides both theoretical grounding and actionable guidelines for building trustworthy, accessible AI systems.
📝 Abstract
This paper is a collaborative piece between two worlds of expertise in the field of data visualization: accessibility and bias. In particular, the growing role of generative models in accessibility is a worrying trend for data visualization. These models are increasingly used to help author visualizations as well as to generate descriptions of existing visualizations for people who are blind, have low vision, or use assistive technologies such as screen readers. Sighted human-to-human bias has already been established as an area of concern for theory, research, and design in data visualization. But what happens when someone is unable to verify the model output or adequately interrogate algorithmic bias, such as a context where a blind person asks a model to describe a chart for them? In such scenarios, trust from the user is not earned; rather, reliance is compelled by the model-to-human relationship. In this work, we explore the dangers of AI-generated descriptions for accessibility by playing a game of telephone between models and observing how bias is produced in model interpretation and re-interpretation of a data visualization. We unpack ways that model failure in visualization is especially problematic for users with visual impairments, and suggest directions forward for three distinct readers of this piece: technologists who build model-assisted interfaces for end users, users with disabilities leveraging models for their own purposes, and researchers concerned with bias, accessibility, or visualization.
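For readers who want a concrete picture of the “telephone game” protocol, the sketch below passes a chart description through a chain of models, each re-describing the previous model's output. This is a minimal illustration, not the authors' actual harness: the `Model` type, the `telephone_game` function, and the prompt wording are all assumptions introduced here.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

# A "model" here is any function mapping a prompt to a text description;
# in practice each would wrap a call to a different generative model's API.
Model = Callable[[str], str]

@dataclass
class Round:
    model_name: str
    description: str

def telephone_game(
    seed_description: str,
    models: List[Tuple[str, Model]],
    rounds: int = 5,
) -> List[Round]:
    """Propagate a visualization description across models, round by round.

    Each model sees only the previous round's output, so any bias or drift
    it introduces is compounded by every model downstream.
    """
    history: List[Round] = []
    current = seed_description
    for i in range(rounds):
        name, model = models[i % len(models)]
        prompt = (
            "Here is a description of a data visualization:\n"
            f"{current}\n"
            "Restate this as a faithful description of the same chart."
        )
        current = model(prompt)
        history.append(Round(model_name=name, description=current))
    return history
```

Under this framing, measuring bias production reduces to comparing each round's description against the ground-truth chart, for example checking whether reported statistics, trends, or value judgments drift from the original data as the rounds progress.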