AI Summary
This study addresses the limitations of current explainable artificial intelligence (XAI) approaches, which predominantly rely on visual modalities and thus fail to meet the needs of blind and low-vision users for timely, accessible feedback during multi-step tasks, often leading to misjudgments and self-blame. Through user interviews and literature analysis, the work identifies unique explainability requirements in environmental perception and decision-support contexts for this population and proposes a novel non-visual XAI direction centered on multimodal interaction, blame-aware explanations, and participatory design. Integrating qualitative research, cross-modal interaction analysis, and dialogic explanation modeling, the study reveals users' strong reliance on linguistic explanations and their tendency toward self-attribution, offering an empirical foundation and roadmap for developing trustworthy, inclusive intelligent agent systems.
Abstract
Explainable Artificial Intelligence (XAI) is critical for ensuring trust and accountability, yet its development remains predominantly visual. For blind and low-vision (BLV) users, the lack of accessible explanations creates a fundamental barrier to the independent use of AI-driven assistive technologies. This problem intensifies as AI systems evolve from single-query tools into autonomous agents that take multi-step actions and make consequential decisions across extended task horizons, where a single undetected error can propagate irreversibly before any feedback becomes available. This paper investigates the distinctive XAI requirements of the BLV community through a comprehensive analysis of user interviews and contemporary research. By examining usage patterns across environmental perception and decision support, we identify a significant modality gap. Empirical evidence suggests that while BLV users highly value conversational explanations, they frequently experience "self-blame" for AI failures. The paper concludes with a research agenda for accessible explainable AI in agentic systems, advocating for multimodal interfaces, blame-aware explanation design, and participatory development.