Explainable AI for Blind and Low-Vision Users: Navigating Trust, Modality, and Interpretability in the Agentic Era

📅 2026-03-31
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This study addresses the limitations of current explainable artificial intelligence (XAI) approaches, which predominantly rely on visual modalities and thus fail to meet the needs of blind and low-vision users for timely, accessible feedback during multi-step tasks, often leading to misjudgments and self-blame. Through user interviews and literature analysis, the work identifies unique explainability requirements in environmental perception and decision-support contexts for this population and proposes a novel non-visual XAI direction centered on multimodal interaction, blame-aware explanations, and participatory design. Integrating qualitative research, cross-modal interaction analysis, and dialogic explanation modeling, the study reveals users' strong reliance on linguistic explanations and their tendency toward self-attribution, offering an empirical foundation and roadmap for developing trustworthy, inclusive intelligent agent systems.
๐Ÿ“ Abstract
Explainable Artificial Intelligence (XAI) is critical for ensuring trust and accountability, yet its development remains predominantly visual. For blind and low-vision (BLV) users, the lack of accessible explanations creates a fundamental barrier to the independent use of AI-driven assistive technologies. This problem intensifies as AI systems evolve from single-query tools into autonomous agents that take multi-step actions and make consequential decisions across extended task horizons, where a single undetected error can propagate irreversibly before any feedback is available. This paper investigates the unique XAI requirements of the BLV community through a comprehensive analysis of user interviews and contemporary research. By examining usage patterns across environmental perception and decision support, we identify a significant modality gap. Empirical evidence suggests that while BLV users highly value conversational explanations, they frequently experience "self-blame" for AI failures. The paper concludes with a research agenda for accessible Explainable AI in agentic systems, advocating for multimodal interfaces, blame-aware explanation design, and participatory development.
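The abstract's notion of "blame-aware explanation design" can be made concrete with a small, purely illustrative sketch. None of these names (`AgentStep`, `explain_failure`) come from the paper; this is a minimal assumption of how an agent's failure report might explicitly attribute the error to the system, as screen-reader-friendly text, to counter users' tendency toward self-attribution.

```python
# Hypothetical sketch of a blame-aware, speech-first failure explanation.
# All names are illustrative, not from the paper under discussion.
from dataclasses import dataclass


@dataclass
class AgentStep:
    action: str      # what the agent attempted, in plain language
    succeeded: bool  # whether the step completed
    cause: str       # agent-side diagnosis of the failure, if any


def explain_failure(step: AgentStep) -> str:
    """Render a step outcome as a spoken-style explanation that names the
    system, not the user, as the locus of the error, and offers a
    non-visual recovery path."""
    if step.succeeded:
        return f"Step '{step.action}' completed."
    return (
        f"I could not complete '{step.action}' because {step.cause}. "
        "This was a limitation of the system, not an error on your part. "
        "Say 'repeat' to hear this again, or 'undo' to roll back the step."
    )


print(explain_failure(
    AgentStep("read the medicine label", False,
              "the camera image was too blurry")
))
```

The key design choice, following the paper's framing, is that the attribution sentence is part of the explanation itself rather than left implicit, and that recovery options are offered through the same conversational channel.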
Problem

Research questions and friction points this paper is trying to address.

Explainable AI
Blind and Low-Vision Users
Accessibility
Agentic Systems
Modality Gap
Innovation

Methods, ideas, or system contributions that make the work stand out.

Explainable AI
Blind and Low-Vision Users
Agentic Systems
Multimodal Interfaces
Blame-Aware Explanation