Dataset resulting from the user study on comprehensibility of explainable AI algorithms

📅 2024-10-21
🏛️ Scientific Data
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the limited comprehensibility of XAI explanations among interdisciplinary users, using a mushroom classification model as a case study. The authors conducted a multi-group qualitative investigation involving mycologists, data scientists, and scholars from the humanities and social sciences (N=39), employing semi-structured interviews, temporal annotation of transcripts, thematic analysis, and pre-testing. This yielded the first empirically grounded XAI interpretability dataset explicitly designed to capture user-background heterogeneity. Key contributions include: (1) empirical identification of domain-specific cognitive gaps and divergent interpretation pathways across user groups; (2) a novel explanation-optimization framework grounded in domain-knowledge adaptation and data-literacy alignment; and (3) an open-sourced, high-quality annotated dataset (interview transcripts, visualization artifacts, coding schemas, and actionable improvement recommendations) that advances human-centered XAI evaluation methodologies and enables reproducible, cross-disciplinary validation.

📝 Abstract
This paper introduces a dataset resulting from a user study on the comprehensibility of explainable artificial intelligence (XAI) algorithms. The study participants were recruited from 149 candidates to form three groups: experts in the domain of mycology (DE), students with a data science and visualization background (IT), and students from the social sciences and humanities (SSH). The main part of the dataset contains 39 transcripts of interviews during which participants were asked to complete a series of tasks and answer questions related to interpreting the explanations of decisions made by a machine learning model trained to distinguish between edible and inedible mushrooms. The transcripts are complemented with additional data, including the explanation visualizations presented to the participants, the results of the thematic analysis, the participants' recommendations for improving the explanations, and the initial survey results that make it possible to determine each participant's domain knowledge and data analysis literacy. The transcripts were manually tagged to allow automatic matching between the text and the other data related to particular fragments. In this era of rapid development of XAI techniques, the need for multidisciplinary qualitative evaluation of explainability is one of the emerging topics in the community. Our dataset not only allows the study we conducted to be reproduced, but also opens a wide range of possibilities for analyzing the material we gathered.
Problem

Research questions and friction points this paper is trying to address.

Evaluating comprehensibility of explainable AI algorithms for diverse user groups
Assessing interpretation of ML model decisions in mushroom classification
Providing multidisciplinary qualitative dataset for XAI technique evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

User study dataset on XAI comprehensibility
Multidisciplinary participant groups for evaluation
Manually tagged transcripts for automated analysis
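The tagged-transcript idea above can be sketched in a few lines of Python. This is a minimal illustration only: the inline marker format `[T1]`, the task ids, and the sample text are assumptions for the sketch, not the dataset's actual tagging schema.

```python
import re

# Assumed tag format: inline markers like [T3] link a transcript fragment
# to the visualization, coding-schema entry, or task with that id.
# The real dataset's tagging convention may differ.
TAG_RE = re.compile(r"\[T(\d+)\]")

def fragments_by_tag(transcript: str) -> dict:
    """Split a transcript on [Tn] markers and map each task id to its fragment."""
    parts = TAG_RE.split(transcript)
    # parts = [preamble, id1, fragment1, id2, fragment2, ...]
    return {int(tid): frag.strip() for tid, frag in zip(parts[1::2], parts[2::2])}

transcript = (
    "[T1] The participant reads the feature-importance plot aloud. "
    "[T2] They ask how the odor attribute influences the prediction."
)
print(fragments_by_tag(transcript))
```

With such a mapping in hand, each fragment can be joined programmatically to the artifact sharing its id, which is the kind of automatic text-to-data matching the tagging was designed to enable.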
Szymon Bobek
Jagiellonian University
explainable artificial intelligence (XAI), artificial intelligence, machine learning, context aware systems, knowledge engineering
Paloma Korycińska
Institute of Information Studies, Faculty of Management and Social Communication, Jagiellonian University, Krakow, Poland
Monika Krakowska
Institute of Information Studies, Faculty of Management and Social Communication, Jagiellonian University, Krakow, Poland
Maciej Mozolewski
Jagiellonian Human-Centered AI Lab, Mark Kac Center for Complex Systems Research, Institute of Applied Computer Science, Jagiellonian University, Krakow, Poland
Dorota Rak
Institute of Information Studies, Faculty of Management and Social Communication, Jagiellonian University, Krakow, Poland
Magdalena Zych
Institute of Information Studies, Faculty of Management and Social Communication, Jagiellonian University, Krakow, Poland
Magdalena Wójcik
Institute of Information Studies, Faculty of Management and Social Communication, Jagiellonian University, Krakow, Poland
Grzegorz J. Nalepa
Jagiellonian University, Kraków, Poland
Artificial Intelligence, Knowledge Engineering, Explainable AI, Data Mining, Affective Computing