Interpretability-Aware Pruning for Efficient Medical Image Analysis

📅 2025-07-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Deep neural models for medical image analysis suffer from excessive parameter counts and poor interpretability, hindering clinical deployment. To address this, we propose an interpretability-aware structured pruning framework that leverages attribution methods (DL-Backtrace, Layer-wise Relevance Propagation, and Integrated Gradients) as dynamic pruning guidance signals. These signals identify and preserve the neuron-level components critical for clinical diagnosis, so that model compression is optimized jointly with predictive performance and decision transparency. Evaluated across multiple medical image classification benchmarks, the method achieves aggressive pruning (>50% parameter reduction) with negligible accuracy degradation (<0.5% drop), while substantially improving inference efficiency and interpretability. The framework points toward lightweight, trustworthy AI models suited for clinical deployment.

📝 Abstract
Deep learning has driven significant advances in medical image analysis, yet its adoption in clinical practice remains constrained by the large size and lack of transparency in modern models. Advances in interpretability techniques such as DL-Backtrace, Layer-wise Relevance Propagation, and Integrated Gradients make it possible to assess the contribution of individual components within neural networks trained on medical imaging tasks. In this work, we introduce an interpretability-guided pruning framework that reduces model complexity while preserving both predictive performance and transparency. By selectively retaining only the most relevant parts of each layer, our method enables targeted compression that maintains clinically meaningful representations. Experiments across multiple medical image classification benchmarks demonstrate that this approach achieves high compression rates with minimal loss in accuracy, paving the way for lightweight, interpretable models suited for real-world deployment in healthcare settings.
Problem

Research questions and friction points this paper is trying to address.

Reducing model complexity while maintaining performance and transparency
Selectively retaining relevant parts for clinically meaningful representations
Achieving high compression rates with minimal accuracy loss
Innovation

Methods, ideas, or system contributions that make the work stand out.

Interpretability-guided pruning framework reduces complexity
Selectively retains most relevant layer parts
Achieves high compression with minimal accuracy loss
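The core idea, scoring each neuron by an attribution signal and pruning the least relevant ones as whole units, can be sketched as follows. This is a minimal illustration, not the paper's method: the toy two-layer MLP, the |activation × gradient| relevance proxy (standing in for DL-Backtrace, LRP, or Integrated Gradients scores), and the fixed 50% pruning ratio are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer MLP standing in for a medical-imaging classifier.
W1, b1 = rng.standard_normal((32, 16)), np.zeros(32)   # input -> 32 hidden units
W2, b2 = rng.standard_normal((2, 32)), np.zeros(2)     # hidden -> 2 classes
X = rng.standard_normal((64, 16))                      # a batch of inputs

# Forward pass.
pre = X @ W1.T + b1            # pre-activations, shape (64, 32)
h = np.maximum(pre, 0.0)       # ReLU activations
logits = h @ W2.T + b2

# Gradient of sum(logits) w.r.t. h: each hidden unit receives the
# column-sum of W2, gated by the ReLU derivative.
grad_h = (pre > 0) * W2.sum(axis=0)    # shape (64, 32)

# Relevance proxy: mean |activation * gradient| per hidden neuron
# (an assumption here, in place of the paper's attribution scores).
relevance = np.abs(h * grad_h).mean(axis=0)   # shape (32,)

# Structured pruning: zero out the 50% least-relevant hidden neurons,
# removing their incoming and outgoing weights as whole rows/columns.
prune = np.argsort(relevance)[: relevance.size // 2]
W1[prune] = 0.0
b1[prune] = 0.0
W2[:, prune] = 0.0
```

Because entire neurons are removed rather than scattered individual weights, the surviving network can be physically shrunk (here, to a 16-unit hidden layer), which is what yields real inference-time savings.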