Uncertainty Propagation in XAI: A Comparison of Analytical and Empirical Estimators

📅 2025-04-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the quantification of uncertainty in explanation outputs within explainable artificial intelligence (XAI), modeling the joint impact of input perturbations and model-parameter variations on the explanation function $e_{\theta}(x, f)$. We propose the first unified framework that formally characterizes uncertainty propagation in XAI. Our analysis reveals systematic failures of mainstream methods, including LIME and SHAP, to capture explanation uncertainty. To enable rigorous evaluation, we establish a benchmark for direct comparison between analytical (first-order uncertainty propagation) and empirical (Monte Carlo variance estimation) estimators, validating their complementary strengths across heterogeneous datasets. We further introduce an explanation consistency metric to systematically assess robustness. All evaluation protocols and open-source code are publicly released, providing both theoretical foundations and practical tools for reliability assessment in XAI.
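The two estimator families named above can be contrasted on a toy example. Below is a minimal Python sketch (not the authors' implementation; the explanation function, noise scales, and sample count are illustrative assumptions) comparing first-order propagation against Monte Carlo variance estimation for an input-times-gradient attribution on a linear model:

```python
# Minimal sketch comparing analytical vs. empirical uncertainty propagation.
# All names and values here are illustrative assumptions, not the paper's code.
import numpy as np

rng = np.random.default_rng(0)

def explanation(x, theta):
    """Toy explanation e_theta(x): input-times-gradient attribution
    for a linear model f(x) = theta @ x, i.e. x * theta elementwise."""
    return x * theta

def analytical_variance(x, theta, sigma_x, sigma_theta):
    """First-order (delta method) propagation:
    Var[e] ~= J_x diag(sigma_x^2) J_x^T + J_theta diag(sigma_theta^2) J_theta^T.
    For e = x * theta, both Jacobians are diagonal, so this is elementwise."""
    return (theta ** 2) * sigma_x ** 2 + (x ** 2) * sigma_theta ** 2

def monte_carlo_variance(x, theta, sigma_x, sigma_theta, n=20000):
    """Empirical estimate: sample perturbed inputs and parameters,
    recompute explanations, and take the per-feature sample variance."""
    xs = x + rng.normal(0.0, sigma_x, size=(n, x.size))
    thetas = theta + rng.normal(0.0, sigma_theta, size=(n, theta.size))
    return (xs * thetas).var(axis=0)

x = np.array([1.0, -2.0, 0.5])
theta = np.array([0.8, 0.3, -1.2])
sigma_x, sigma_theta = 0.1, 0.05

print("analytical :", analytical_variance(x, theta, sigma_x, sigma_theta))
print("monte carlo:", monte_carlo_variance(x, theta, sigma_x, sigma_theta))
```

For this toy case the two estimates agree up to a small second-order cross term ($\sigma_x^2 \sigma_\theta^2$) that the first-order approximation drops by construction.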

📝 Abstract
Understanding uncertainty in Explainable AI (XAI) is crucial for building trust and ensuring reliable decision-making in machine learning models. This paper introduces a unified framework for quantifying and interpreting uncertainty in XAI by defining a general explanation function $e_{\theta}(x, f)$ that captures the propagation of uncertainty from key sources: perturbations in input data and model parameters. By using both analytical and empirical estimates of explanation variance, we provide a systematic means of assessing the impact of uncertainty on explanations. We illustrate the approach using first-order uncertainty propagation as the analytical estimator. In a comprehensive evaluation across heterogeneous datasets, we compare analytical and empirical estimates of uncertainty propagation and evaluate their robustness. Extending previous work on inconsistencies in explanations, our experiments identify XAI methods that do not reliably capture and propagate uncertainty. Our findings underscore the importance of uncertainty-aware explanations in high-stakes applications and offer new insights into the limitations of current XAI methods. The code for the experiments can be found in our repository at https://github.com/TeodorChiaburu/UXAI.
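One way the robustness assessment described above could be operationalized is as the mean pairwise cosine similarity between attribution vectors recomputed under repeated perturbations; the sketch below is an illustrative assumption, not necessarily the paper's exact consistency metric:

```python
# Illustrative consistency score (assumption: the paper's exact metric may
# differ): mean pairwise cosine similarity between explanations for the same
# instance recomputed under n_runs perturbations of inputs/parameters.
import numpy as np

def explanation_consistency(attributions: np.ndarray) -> float:
    """attributions: (n_runs, n_features) array of attribution vectors.
    Returns mean pairwise cosine similarity in [-1, 1]; 1 = fully consistent."""
    normed = attributions / np.linalg.norm(attributions, axis=1, keepdims=True)
    sims = normed @ normed.T                  # all pairwise cosine similarities
    n = len(attributions)
    off_diag = sims[~np.eye(n, dtype=bool)]   # drop self-similarities
    return float(off_diag.mean())

# Example: near-identical explanations across runs score close to 1.
runs = np.array([[0.80, 0.30, -1.20],
                 [0.82, 0.28, -1.18],
                 [0.79, 0.31, -1.22]])
print(explanation_consistency(runs))
```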
Problem

Research questions and friction points this paper is trying to address.

Quantify uncertainty propagation in XAI methods
Compare analytical and empirical uncertainty estimators
Assess robustness of XAI methods in high-stakes applications
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified framework for XAI uncertainty quantification
Analytical and empirical explanation variance estimates
First-order uncertainty propagation as the analytical estimator (general form sketched below)
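As a reference point, the analytical estimator under a first-order (delta method) approximation can be written as follows; the notation follows the abstract's $e_{\theta}(x, f)$, while the specific covariance structure is an assumption for illustration:

$$
\operatorname{Var}\!\left[e_{\theta}(x, f)\right] \;\approx\; J_x \Sigma_x J_x^{\top} \;+\; J_{\theta} \Sigma_{\theta} J_{\theta}^{\top},
\qquad
J_x = \frac{\partial e_{\theta}}{\partial x}, \quad
J_{\theta} = \frac{\partial e_{\theta}}{\partial \theta},
$$

where $\Sigma_x$ and $\Sigma_\theta$ are the covariances of the input and parameter perturbations, assumed independent of each other.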
Teodor Chiaburu
Berliner Hochschule für Technik, Berlin, Germany
Felix Biessmann
Einstein Center Digital Future, Berlin University of Applied Sciences
Frank Hausser
Berliner Hochschule für Technik, Berlin, Germany