🤖 AI Summary
This study addresses the quantification of uncertainty in explanation outputs within explainable artificial intelligence (XAI), specifically modeling the joint impact of input perturbations and model parameter variations on the explanation function $e_\theta(x, f)$. We propose the first unified framework that formally characterizes uncertainty propagation in XAI. Our analysis reveals systematic failures of mainstream methods—including LIME and SHAP—in capturing explanation uncertainty. To enable rigorous evaluation, we establish a benchmark for direct comparison between analytical (first-order uncertainty propagation) and empirical (Monte Carlo variance estimation) approaches, validating their complementary strengths across heterogeneous datasets. We further introduce an explanation consistency metric to systematically assess robustness. All evaluation protocols and open-source code are publicly released, providing both theoretical foundations and practical tools for reliability assessment in XAI.
📝 Abstract
Understanding uncertainty in Explainable AI (XAI) is crucial for building trust and ensuring reliable decision-making in machine learning models. This paper introduces a unified framework for quantifying and interpreting uncertainty in XAI by defining a general explanation function $e_{\theta}(x, f)$ that captures the propagation of uncertainty from key sources: perturbations in input data and model parameters. By using both analytical and empirical estimates of explanation variance, we provide a systematic means of assessing the impact of uncertainty on explanations. We illustrate the approach using first-order uncertainty propagation as the analytical estimator. In a comprehensive evaluation across heterogeneous datasets, we compare analytical and empirical estimates of uncertainty propagation and evaluate their robustness. Extending previous work on inconsistencies in explanations, our experiments identify XAI methods that do not reliably capture and propagate uncertainty. Our findings underscore the importance of uncertainty-aware explanations in high-stakes applications and offer new insights into the limitations of current XAI methods. The code for the experiments can be found in our repository at https://github.com/TeodorChiaburu/UXAI
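To make the analytical-vs-empirical comparison concrete, here is a minimal sketch of the two estimators on a toy linear model. All names (the weight vector `w`, the input-times-gradient attribution, the perturbation scale `sigma`) are illustrative assumptions, not the paper's actual experimental setup: for a linear model $f(x) = w^\top x$ with attribution $e(x) = w \odot x$ under Gaussian input noise, first-order propagation gives $\mathrm{Var}[e_i] \approx w_i^2 \sigma^2$, which a Monte Carlo estimate over perturbed inputs should recover.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative toy setup (not the paper's experiments):
# model f(x) = w.x with input-times-gradient attribution e(x) = w * x
w = np.array([0.5, -1.2, 2.0])     # model weights
x0 = np.array([1.0, 0.3, -0.7])    # input to explain
sigma = 0.1                        # std of Gaussian input perturbation

def explanation(x):
    """Attribution e_theta(x, f); here: input times gradient."""
    return w * x

# Analytical estimate: first-order (delta-method) propagation.
# Jacobian of e w.r.t. x is diag(w), so Var[e_i] ~= w_i^2 * sigma^2.
var_analytical = w**2 * sigma**2

# Empirical estimate: Monte Carlo variance over perturbed inputs.
samples = np.array([explanation(x0 + sigma * rng.standard_normal(3))
                    for _ in range(20_000)])
var_mc = samples.var(axis=0)

print("analytical:", var_analytical)
print("monte carlo:", var_mc)
```

For this linear attribution the first-order estimate is exact, so the two variance estimates agree up to Monte Carlo sampling error; the paper's evaluation probes where this agreement breaks down for nonlinear models and real XAI methods.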