🤖 AI Summary
Overconfident predictions from image classification models compromise the reliability of pixel-wise attribution, limiting explainable decision support for intraoperative tissue characterisation. To address this, we propose the first attribution framework that explicitly incorporates risk estimation: it generates a pixel-level distribution of attribution values through multiple rounds of iterative attribution, constructs an enhanced attribution map from the pixel-wise expectation, and quantifies uncertainty via the coefficient of variation to produce a per-pixel risk map. This mitigates the confounding effect of model overconfidence on attribution, improving attribution robustness and interpretability on both pCLE (probe-based confocal laser endomicroscopy) and ImageNet benchmarks and outperforming state-of-the-art attribution methods. The core contribution is the first combined treatment of pixel-level attribution and uncertainty estimation, yielding a clinically deployable decision-support tool that offers both interpretability and trustworthiness for surgical image classification.
📝 Abstract
The deployment of Machine Learning models intraoperatively for tissue characterisation can assist decision making and guide safe tumour resections. For image classification models, pixel attribution (PA) methods are popular for inferring explainability. However, overconfidence in a deep learning model's predictions translates to overconfidence in pixel attribution. In this paper, we propose the first approach which incorporates risk estimation into a pixel attribution method for improved image classification explainability. The proposed method iteratively applies a classification model with a pixel attribution method to create a volume of PA maps. This volume is used, for the first time, to generate a pixel-wise distribution of PA values. We introduce a method to generate an enhanced PA map by estimating the expectation values of the pixel-wise distributions. In addition, the coefficient of variation (CV) is used to estimate the pixel-wise risk of this enhanced PA map. Hence, the proposed method not only provides an improved PA map but also produces an estimate of risk for the output PA values. Performance evaluation on probe-based Confocal Laser Endomicroscopy (pCLE) data and ImageNet verifies that our improved explainability method outperforms the state-of-the-art.
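The pipeline the abstract describes (a volume of PA maps → pixel-wise expectation → CV-based risk map) can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the attribution volume is simulated with random maps standing in for repeated runs of a real classifier plus attribution method, and the `eps` stabiliser for near-zero means is an assumption.

```python
import numpy as np

def enhanced_pa_with_risk(pa_volume, eps=1e-8):
    """Collapse a volume of pixel attribution (PA) maps into:
    - an enhanced PA map: the pixel-wise expectation (mean) over rounds,
    - a risk map: the pixel-wise coefficient of variation, CV = std / |mean|.
    `pa_volume` has shape (rounds, H, W); `eps` (an assumption here)
    guards against division by near-zero means."""
    pa_volume = np.asarray(pa_volume, dtype=np.float64)
    enhanced = pa_volume.mean(axis=0)                        # expectation per pixel
    risk = pa_volume.std(axis=0) / (np.abs(enhanced) + eps)  # CV per pixel
    return enhanced, risk

# Toy example: 16 attribution rounds on an 8x8 image, simulated with
# random maps in place of a real model + PA method (e.g. MC-dropout runs).
rng = np.random.default_rng(0)
volume = rng.normal(loc=1.0, scale=0.1, size=(16, 8, 8))
enhanced, risk = enhanced_pa_with_risk(volume)
```

Pixels where the rounds agree get a low CV (trustworthy attribution), while pixels with high variance relative to their mean attribution are flagged as high risk.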