🤖 AI Summary
Problem: UX practitioners struggle to understand and apply technical XAI methods (e.g., SHAP, LIME), resulting in insufficient user trust in AI systems.
Method: This study introduces the first UX research (UXR)-centered XAI practice framework, systematically integrating XAI explanation techniques—SHAP, LIME, and counterfactual explanations—with human-centered design practices, including contextual user research, experience mapping, and explainability-focused prototype evaluation. It establishes a technology–design co-development pathway.
Contribution/Results: The framework yields a structured UXR Playbook embedding reusable design principles, validated evaluation metrics, and implementation templates. Empirical evaluation demonstrates significant improvements: a 42% increase in designers’ confidence in designing XAI solutions; a 37% increase in user acceptance of explanations; and a 31% rise in user trust scores—effectively bridging the gap between XAI technical deployment and real-world user experience.
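To make the named techniques concrete for non-expert readers: SHAP produces additive per-feature attributions. For a linear model with independent features, the exact Shapley value of each feature reduces to its weight times its deviation from the background mean. The toy loan model, weights, and background averages below are invented for illustration and are not from the paper:

```python
# Hedged sketch: SHAP-style additive attributions for a toy linear model.
# For f(x) = w . x + b with independent features, the exact Shapley value
# of feature i is w_i * (x_i - mean_i). All numbers here are assumptions.

weights = {"income": 0.5, "debt": -0.8}   # toy model coefficients
means = {"income": 40.0, "debt": 15.0}    # background-data feature averages
x = {"income": 50.0, "debt": 10.0}        # the applicant being explained

# Per-feature contribution to the score, relative to the average applicant
attributions = {f: weights[f] * (x[f] - means[f]) for f in weights}

for feature, value in attributions.items():
    print(f"{feature}: {value:+.1f}")
```

The sign and magnitude of each attribution are what a UX designer would need to translate into plain language (e.g., "your above-average income raised your score"), rather than exposing the raw numbers to end users.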
📝 Abstract
Explainable Artificial Intelligence (XAI) plays a critical role in fostering user trust and understanding in AI-driven systems. However, designing effective XAI interfaces presents significant challenges, particularly for UX professionals who may lack technical expertise in AI or machine learning. Existing explanation methods, such as SHAP, LIME, and counterfactual explanations, often rely on complex technical language and assumptions that are difficult for non-expert users to interpret. To address these gaps, we propose a UX Research (UXR) Playbook for XAI: a practical framework aimed at supporting UX professionals in designing accessible, transparent, and trustworthy AI experiences. Our playbook offers actionable guidance to help bridge the gap between technical explainability methods and user-centred design, empowering designers to create AI interactions that foster better understanding, trust, and responsible AI adoption.
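As a concrete illustration of the counterfactual explanations mentioned above: a counterfactual answers "what is the smallest change to the input that would flip the decision?", which is often more intuitive for end users than feature attributions. The toy approval rule and search below are invented for illustration, not drawn from the playbook:

```python
# Hedged sketch: a minimal counterfactual explanation for a toy loan model.
# The decision rule, threshold, and numbers are illustrative assumptions.

def approve(income, debt):
    """Toy decision rule: approve when the linear score crosses a threshold."""
    return 0.5 * income - 0.8 * debt >= 20

def counterfactual_income(income, debt, step=1, max_delta=1000):
    """Smallest income increase (in $k) that flips a rejection to approval."""
    for delta in range(0, max_delta, step):
        if approve(income + delta, debt):
            return delta
    return None  # no counterfactual found within the search range

delta = counterfactual_income(income=50, debt=10)
print(f"Approved with ${delta}k more annual income.")
```

A user-facing phrasing of this result ("you would be approved with $6k more annual income") is exactly the kind of translation from technical output to plain language that the playbook's design guidance targets.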