🤖 AI Summary
Existing eXplainable AI (XAI) methods predominantly focus on post-hoc explanations for individual predictions, neglecting upstream phases of the ML lifecycle (e.g., data curation, modeling choices) and downstream phases (e.g., validation, stakeholder communication, trust calibration), which leaves user trust poorly supported. Method: We propose Holistic XAI (HXAI), an end-to-end framework spanning the entire data-analysis workflow, structured around six interdependent components: data, analysis set-up, learning process, model output, model quality, and communication channel. It delivers role-specific explanations tailored to domain experts, data analysts, and data scientists. Contribution/Results: We introduce a unified XAI taxonomy and a 112-item question bank covering stakeholder needs; surveying contemporary tools against this bank reveals critical coverage gaps. Grounded in theories of human explanation, human-computer interaction principles, and empirical user studies, HXAI shows how LLM-driven AI agents can orchestrate diverse explanation techniques into dynamic, narrative-style explanations for each stakeholder role, advancing an end-to-end perspective on transparency, trustworthiness, and responsible AI deployment.
📝 Abstract
Artificial intelligence is reshaping science and industry, yet many users still regard its models as opaque "black boxes". Conventional explainable artificial intelligence (XAI) methods clarify individual predictions but overlook the upstream decisions and downstream quality checks that determine whether those insights can be trusted. In this work, we present Holistic Explainable Artificial Intelligence (HXAI), a user-centric framework that embeds explanation into every stage of the data-analysis workflow and tailors those explanations to each stakeholder. HXAI unifies six components (data, analysis set-up, learning process, model output, model quality, communication channel) into a single taxonomy and aligns each component with the needs of domain experts, data analysts, and data scientists. A 112-item question bank covers these needs; our survey of contemporary tools highlights critical coverage gaps. Grounded in theories of human explanation, principles from human-computer interaction, and findings from empirical user studies, HXAI identifies the characteristics that make explanations clear, actionable, and cognitively manageable. A comprehensive taxonomy operationalises these insights, reducing terminological ambiguity and enabling rigorous coverage analysis of existing toolchains. We further demonstrate how AI agents that embed large language models can orchestrate diverse explanation techniques, translating technical artifacts into stakeholder-specific narratives that bridge the gap between AI developers and domain experts. Departing from traditional surveys and perspective articles, this work melds concepts from multiple disciplines, lessons from real-world projects, and a critical synthesis of the literature to advance a novel, end-to-end viewpoint on transparency, trustworthiness, and responsible AI deployment.