🤖 AI Summary
This study addresses the challenge of human-centered explainability in interactive information systems, aiming to enhance users' comprehension, interpretation, and critical evaluation of AI outputs in support of informed decision-making. Following PRISMA guidelines, we systematically screened and structurally coded 100 peer-reviewed articles. Based on this analysis, we propose a novel five-dimensional conceptual model of explainability (transparency, causality, relevance, controllability, and actionability) and introduce the first user-needs-driven taxonomy for explanation design. We also identify six user-centered dimensions for evaluating explainability, the first such classification in the literature. These contributions move explainability research from ad hoc, experience-based practice toward systematic, theory-grounded inquiry. The resulting integrative framework offers both theoretical rigor and practical guidance for designing transparent, trustworthy, and responsible interactive information systems.
📝 Abstract
Human-centered explainability has become a critical foundation for the responsible development of interactive information systems, where users must be able to understand, interpret, and scrutinize AI-driven outputs to make informed decisions. This systematic literature review characterizes recent progress in user studies on explainability in interactive information systems by examining how explainability has been conceptualized, designed, and evaluated in practice. Following PRISMA guidelines, eight academic databases were searched and 100 relevant articles were identified; a structural coding approach was then used to extract and synthesize insights from these articles. The main contributions are (1) five dimensions that researchers have used to conceptualize explainability; (2) a classification scheme of explanation designs; and (3) a categorization of explainability measurements into six user-centered dimensions. The review concludes by reflecting on ongoing challenges and offering recommendations for future research. The findings shed light on the theoretical foundations of human-centered explainability, informing the design of interactive information systems that better align with diverse user needs and promoting the development of systems that are transparent, trustworthy, and accountable.