A Comprehensive Perspective on Explainable AI across the Machine Learning Workflow

📅 2025-08-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Problem: Existing eXplainable AI (XAI) methods focus predominantly on post-hoc explanations for individual predictions, neglecting the upstream phases of the ML lifecycle (e.g., data curation, modeling choices, validation) and the downstream ones (e.g., stakeholder communication, trust calibration), which undermines user trust. Method: We propose Holistic XAI (HXAI), the first end-to-end framework spanning the entire ML lifecycle, structured around six interdependent pillars: data, features, model, prediction, validation, and communication. It delivers role-specific explanations tailored to diverse stakeholders. Contribution/Results: We introduce a unified XAI taxonomy and an evaluation library of 112 diagnostic questions that reveals critical gaps in current toolchains. Grounded in human explanation theory, HCI principles, and empirical user studies, we design an LLM-driven AI agent that generates dynamic, narrative-style explanations. Extensive experiments demonstrate HXAI's significant improvements in explanation effectiveness and credibility across stakeholder roles.

📝 Abstract
Artificial intelligence is reshaping science and industry, yet many users still regard its models as opaque "black boxes". Conventional explainable artificial-intelligence methods clarify individual predictions but overlook the upstream decisions and downstream quality checks that determine whether insights can be trusted. In this work, we present Holistic Explainable Artificial Intelligence (HXAI), a user-centric framework that embeds explanation into every stage of the data-analysis workflow and tailors those explanations to users. HXAI unifies six components (data, analysis set-up, learning process, model output, model quality, communication channel) into a single taxonomy and aligns each component with the needs of domain experts, data analysts and data scientists. A 112-item question bank covers these needs; our survey of contemporary tools highlights critical coverage gaps. Grounded in theories of human explanation, principles from human-computer interaction and findings from empirical user studies, HXAI identifies the characteristics that make explanations clear, actionable and cognitively manageable. A comprehensive taxonomy operationalises these insights, reducing terminological ambiguity and enabling rigorous coverage analysis of existing toolchains. We further demonstrate how AI agents that embed large-language models can orchestrate diverse explanation techniques, translating technical artifacts into stakeholder-specific narratives that bridge the gap between AI developers and domain experts. Departing from traditional surveys or perspective articles, this work melds concepts from multiple disciplines, lessons from real-world projects and a critical synthesis of the literature to advance a novel, end-to-end viewpoint on transparency, trustworthiness and responsible AI deployment.
Problem

Research questions and friction points this paper is trying to address.

Addressing opacity in AI models as black boxes
Integrating explainability across all stages of AI workflow
Bridging gap between AI developers and domain experts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Holistic Explainable AI framework for all workflow stages
Unified taxonomy aligning components with user needs
AI agents using LLMs for stakeholder-specific explanations
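The role-specific narration idea above can be sketched as a simple role-to-pillar router. Everything below is illustrative, not from the paper: the `ROLE_PROFILES` mapping, the pillar emphasis per role, and the artifact values are assumptions, and the LLM call is replaced by a plain template.

```python
# Minimal sketch (assumptions, not the paper's implementation) of routing
# role-specific explanations: stakeholder roles map to the HXAI pillars they
# care about, and an LLM (stubbed here as a template) would turn the
# technical artifacts into a narrative.

ROLE_PROFILES = {
    # Hypothetical role -> pillars emphasised (names follow the abstract's
    # six components; which role gets which pillars is our assumption)
    "domain expert":  ["data", "model output", "communication channel"],
    "data analyst":   ["analysis set-up", "model quality"],
    "data scientist": ["learning process", "model quality"],
}

def narrate(role: str, artifacts: dict) -> str:
    """Assemble a narrative from the artifacts relevant to the given role.
    A real agent would hand this context to an LLM; here we just template it."""
    pillars = ROLE_PROFILES.get(role, [])
    parts = [f"{p}: {artifacts[p]}" for p in pillars if p in artifacts]
    return f"For the {role} - " + "; ".join(parts)

# Illustrative artifacts a pipeline might emit
artifacts = {
    "data": "1,200 samples, 3% missing values imputed",
    "analysis set-up": "nested cross-validation, 10 outer folds",
    "model quality": "AUC 0.91 (95% CI 0.88-0.94)",
}
print(narrate("data analyst", artifacts))
```

The point of the sketch is the separation of concerns: the taxonomy decides *what* each stakeholder needs, while the generation step (an LLM in the paper's design) decides *how* to phrase it.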
George Paterakis
JADBio Gnosis DA S.A., N.Plastira 100, Heraklion, 70013, Crete, Greece.
Andrea Castellani
Honda Research Institute Europe, Carl-Legien-Strasse 30, Offenbach am Main, 63073, Hesse, Germany.
George Papoutsoglou
JADBio Gnosis DA S.A., N.Plastira 100, Heraklion, 70013, Crete, Greece.
Tobias Rodemann
Honda Research Institute Europe, Carl-Legien-Strasse 30, Offenbach am Main, 63073, Hesse, Germany.
Ioannis Tsamardinos
Professor, Computer Science Department, University of Crete
Machine Learning · Data Science · Bioinformatics · Causal Discovery · Artificial Intelligence