🤖 AI Summary
Deep learning models are increasingly deployed in critical domains such as healthcare, yet their “black-box” nature hinders clinical trust and adoption. Moreover, existing eXplainable AI (XAI) methods exhibit high inter-method variability, often producing conflicting explanations for the same prediction, which undermines trust in the explanations and their practical utility.
Method: We propose a unified framework that jointly optimizes explanation fidelity and comprehensibility. It introduces a lightweight neural “Explanation Optimizer” that adaptively fuses outputs from multiple XAI methods (e.g., Grad-CAM, Integrated Gradients) and performs end-to-end optimization with dual objectives: maximizing faithfulness and minimizing explanation complexity.
Contribution/Results: Evaluated on 2D and 3D medical image classification tasks, our method improves explanation fidelity by 63% and 155%, respectively, while substantially reducing explanation complexity. This yields more reliable, clinically actionable interpretations—bridging the gap between technical explainability and real-world medical decision support.
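The fuse-then-score loop behind this idea can be sketched as follows. This is an illustrative stand-in, not the paper's implementation: the occlusion-based faithfulness proxy, the entropy complexity measure, and the random-search optimizer (in place of the paper's gradient-trained neural optimizer) are my own simplifications, and all function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def faithfulness(expl, model_fn, x, n_top=5):
    """Proxy faithfulness: occlude the top-attributed features and
    measure how much the model's score drops. A larger drop means the
    explanation better reflects what the model actually relies on.
    (Features are flattened to 1-D for simplicity.)"""
    base = model_fn(x)
    top = np.argsort(np.abs(expl))[::-1][:n_top]
    x_masked = x.copy()
    x_masked[top] = 0.0
    return base - model_fn(x_masked)

def complexity(expl, eps=1e-12):
    """Entropy of the normalized |attributions|: lower entropy means a
    sparser, easier-to-read map."""
    p = np.abs(expl) / (np.abs(expl).sum() + eps)
    return float(-(p * np.log(p + eps)).sum())

def fuse(expls, logits):
    """Convex combination of candidate XAI maps via softmax weights."""
    w = np.exp(logits - logits.max())
    w /= w.sum()
    return np.einsum("m,mf->f", w, expls)

def optimize(expls, model_fn, x, lam=0.1, iters=200):
    """Random search over fusion weights for the dual objective
    J = faithfulness - lam * complexity (a crude substitute for the
    paper's end-to-end trained 'explanation optimizer')."""
    best_logits, best_j = None, -np.inf
    for _ in range(iters):
        logits = rng.normal(size=len(expls))
        e = fuse(expls, logits)
        j = faithfulness(e, model_fn, x) - lam * complexity(e)
        if j > best_j:
            best_j, best_logits = j, logits
    return fuse(expls, best_logits), best_j
```

Given a sparse, faithful candidate map and a noisy uniform one, this objective drives the fused explanation toward the faithful candidate while keeping its entropy low, which mirrors the fidelity-versus-comprehensibility trade-off the framework targets.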
📝 Abstract
The accelerated progress of artificial intelligence (AI) has popularized deep learning models across various domains, yet their inherent opacity poses challenges, particularly in critical fields like healthcare, medicine, and the geosciences. Explainable AI (XAI) has emerged to shed light on these “black-box” models, aiding in deciphering their decision-making processes. However, different XAI methods often produce significantly different explanations, leading to high inter-method variability that increases uncertainty and undermines trust in deep networks' predictions. In this study, we address this challenge by introducing a novel framework designed to enhance the explainability of deep networks through a dual focus on maximizing both accuracy and comprehensibility of the explanations. Our framework integrates outputs from multiple established XAI methods and leverages a non-linear neural network model, termed the “explanation optimizer,” to construct a unified, optimal explanation. The optimizer evaluates explanations using two key metrics: faithfulness (how accurately the explanation reflects the network's decisions) and complexity (how easy the explanation is to understand). By balancing these, it provides accurate and accessible explanations, addressing a key limitation of current XAI methods. Experiments on multi-class 2D object classification and binary 3D neuroimaging classification confirm its efficacy. Our optimizer achieved faithfulness scores 155% and 63% higher than the best individual XAI methods in the 3D and 2D tasks, respectively, while also reducing complexity for better understanding. These results demonstrate that optimal explanations based on specific quality criteria are achievable, offering a solution to the issue of inter-method variability in the current XAI literature and supporting more trustworthy deep network predictions.
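The dual objective described in the abstract can be written compactly; the notation below is mine, not the paper's:

```latex
% E_phi : fused explanation produced by the optimizer with parameters phi
% mu_F  : faithfulness score (higher = explanation tracks the model's decision)
% mu_C  : complexity score (lower = sparser, more comprehensible)
% lambda: trade-off weight between the two criteria
\max_{\phi} \; J(\phi) \;=\; \mu_F\bigl(E_\phi(x)\bigr) \;-\; \lambda \, \mu_C\bigl(E_\phi(x)\bigr)
```

Setting \(\lambda = 0\) recovers pure fidelity maximization; increasing \(\lambda\) trades some faithfulness for sparser, easier-to-read explanation maps.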