🤖 AI Summary
To address the reliability challenge of unsupervised out-of-distribution (OOD) detection in medical imaging, this paper proposes a feature-masking method based on multi-exit class activation maps (CAMs). By fusing CAMs from multiple network depths, it builds an inverted mask that suppresses the semantically salient regions of both in-distribution (ID) and OOD samples; because ID activations are concentrated in those regions, ID feature representations shift far more under masking than OOD ones, enabling OOD discrimination without requiring OOD samples or labels. The core contributions are the multi-exit CAM fusion mechanism and the inverse-CAM masking strategy, which together preserve semantic consistency while keeping the score discriminative. Evaluated with ISIC19 and PathMNIST as ID sources and RSNA Pneumonia, COVID-19, and HeadCT as OOD sources, the method reduces the false positive rate at 95% true positive rate (FPR95) by 37% on average over state-of-the-art methods, enhancing safety for clinical deployment.
📝 Abstract
Out-of-distribution (OOD) detection is essential for ensuring the reliability of deep learning models in medical imaging applications. This work is motivated by the observation that class activation maps (CAMs) for in-distribution (ID) data typically emphasize regions that are highly relevant to the model's predictions, whereas OOD data often lacks such focused activations. When input images are masked with inverted CAMs, the feature representations of ID data undergo more substantial changes than those of OOD data, offering a robust criterion for differentiation. In this paper, we introduce a novel unsupervised OOD detection framework, Multi-Exit Class Activation Map (MECAM), which leverages multi-exit CAMs and feature masking. By utilizing multi-exit networks that combine CAMs from varying resolutions and depths, our method captures both global and local feature representations, thereby enhancing the robustness of OOD detection. We evaluate MECAM on multiple ID datasets, including ISIC19 and PathMNIST, and test its performance against three medical OOD datasets, RSNA Pneumonia, COVID-19, and HeadCT, and one natural image OOD dataset, iSUN. Comprehensive comparisons with state-of-the-art OOD detection methods validate the effectiveness of our approach. Our findings emphasize the potential of multi-exit networks and feature masking for advancing unsupervised OOD detection in medical imaging, paving the way for more reliable and interpretable models in clinical practice.
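The core scoring idea described above can be sketched in a few lines. This is a minimal toy illustration, not the paper's implementation: the feature extractor is a stand-in for a CNN backbone, the CAMs are synthetic, and a single exit is used (the paper fuses CAMs from multiple exits, e.g. by combining maps from several depths before inversion). A sharply focused CAM mimics an ID sample; a flat CAM mimics an OOD sample with no salient region.

```python
import numpy as np

def inverse_cam_mask(cam):
    """Invert a CAM heatmap so salient regions are suppressed.
    After min-max normalization, the mask is ~0 where the model
    attended and ~1 elsewhere."""
    cam = cam - cam.min()
    denom = cam.max() if cam.max() > 0 else 1.0
    return 1.0 - cam / denom

def toy_features(img):
    """Stand-in for a CNN feature extractor: pooled statistics."""
    return np.array([img.mean(), img.std()])

def masking_sensitivity_score(x, cam, feature_fn):
    """OOD score sketch: how far the feature representation shifts
    when the CAM-salient regions are masked out. Focused (ID-like)
    CAMs remove informative pixels and cause a large shift; diffuse
    (OOD-like) CAMs leave the input nearly unchanged."""
    x_masked = x * inverse_cam_mask(cam)
    return np.linalg.norm(feature_fn(x) - feature_fn(x_masked))

rng = np.random.default_rng(0)
x = rng.random((8, 8))

focused_cam = np.zeros((8, 8))
focused_cam[3:5, 3:5] = 1.0        # ID-like: sharp, localized activation
diffuse_cam = np.full((8, 8), 0.5)  # OOD-like: flat, unfocused activation

id_score = masking_sensitivity_score(x, focused_cam, toy_features)
ood_score = masking_sensitivity_score(x, diffuse_cam, toy_features)
```

In this toy setup the flat CAM normalizes to an all-ones mask, so the masked input is unchanged and the OOD score is zero, while the focused CAM zeroes out real pixels and produces a positive score; thresholding such a score is what separates ID from OOD.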