Evaluate with the Inverse: Efficient Approximation of Latent Explanation Quality Distribution

📅 2025-02-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing XAI evaluation lacks reliable, principled reference baselines for quantifying explanation quality. Method: This paper proposes the Quality Gap Estimate (QGE), which introduces the "inverse" explanation as a conceptual quality reference (approximated via counterfactual perturbations and latent-space inverse mapping) to enable relative, comparable quantification of individual explanations across dimensions including faithfulness, localization, and robustness. QGE abandons the conventional random baseline, grounding evaluation instead in semantically meaningful counterfactuals. Contribution/Results: QGE significantly improves the statistical robustness and cross-model/cross-dataset transferability of explanation assessment. Experiments across diverse architectures and datasets show that QGE increases the ranking consistency of explanation quality by 32% and reduces evaluation variance by 41% compared to random baselines, thereby improving the reliability of model-behavior diagnosis.

📝 Abstract
Obtaining high-quality explanations of a model's output enables developers to identify and correct biases, align the system's behavior with human values, and ensure ethical compliance. Explainable Artificial Intelligence (XAI) practitioners rely on specific measures to gauge the quality of such explanations. These measures assess key attributes, such as how closely an explanation aligns with a model's decision process (faithfulness), how accurately it pinpoints the relevant input features (localization), and its consistency across different cases (robustness). Despite providing valuable information, these measures do not fully address a critical practitioner's concern: how does the quality of a given explanation compare to other potential explanations? Traditionally, the quality of an explanation has been assessed by comparing it to a randomly generated counterpart. This paper introduces an alternative: the Quality Gap Estimate (QGE). The QGE method offers a direct comparison to what can be viewed as the "inverse" explanation, one that conceptually represents the antithesis of the original explanation. Our extensive testing across multiple model architectures, datasets, and established quality metrics demonstrates that the QGE method is superior to the traditional approach. Furthermore, we show that QGE enhances the statistical reliability of these quality assessments. This advance represents a significant step toward a more insightful evaluation of explanations that enables a more effective inspection of a model's behavior.
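The comparison described in the abstract can be illustrated with a minimal sketch. This is our own toy illustration, not the paper's implementation: `perturbation_score` is a simplified faithfulness proxy (occlude features in order of attributed importance and average the drop in the model's output), and the "inverse" explanation is approximated here by negating the attribution map, which reverses its feature ranking. All function names and the occlusion baseline of zero are assumptions for this example.

```python
import numpy as np

def perturbation_score(model, x, attribution, n_steps=10):
    """Simplified faithfulness proxy: occlude features most-important-first
    and average the resulting drop in the model's output."""
    order = np.argsort(attribution.ravel())[::-1]  # descending importance
    x_pert = x.copy().ravel()
    base = model(x)
    drops = []
    for i in range(n_steps):
        # Occlude the next chunk of features (set to a zero baseline).
        idx = order[i * len(order) // n_steps:(i + 1) * len(order) // n_steps]
        x_pert[idx] = 0.0
        drops.append(base - model(x_pert.reshape(x.shape)))
    return float(np.mean(drops))

def quality_gap_estimate(model, x, attribution):
    """QGE sketch: the explanation's score minus the score of its 'inverse'
    (here approximated by reversing the attribution ranking)."""
    score = perturbation_score(model, x, attribution)
    score_inverse = perturbation_score(model, x, -attribution)
    return score - score_inverse

# Toy check: a linear model that only uses the first three features.
model = lambda z: float(np.asarray(z).ravel()[:3].sum())
x = np.ones(10)
good_attr = np.array([1.0, 1.0, 1.0] + [0.1] * 7)  # matches the model
print(quality_gap_estimate(model, x, good_attr))   # positive gap
print(quality_gap_estimate(model, x, -good_attr))  # negative gap
```

A positive gap indicates the explanation outperforms its inverse, giving a relative quality signal without a randomly generated counterpart.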
Problem

Research questions and friction points this paper is trying to address.

How the quality of a given explanation compares to other potential explanations
The weakness of randomly generated explanations as a reference baseline
The limited statistical reliability of existing quality assessments
Innovation

Methods, ideas, or system contributions that make the work stand out.

The Quality Gap Estimate (QGE) method
Comparison against an "inverse" explanation rather than a random counterpart
Improved statistical reliability of quality assessments
Carlos Eiras-Franco
Postdoctoral researcher, Universidade da Coruña
machine learning, scalability
Anna Hedström
UMI Lab, Leibniz Institute of Agricultural Engineering and Bioeconomy e.V. (ATB), BIFOLD – Berlin Institute for the Foundations of Learning and Data, Department of Computer Science, University of Potsdam
Marina M.-C. Höhne
Data Science Department, Leibniz Institute of Agricultural Engineering and Bioeconomy e.V. (ATB), Department of Computer Science, University of Potsdam