Interpretable Deep Transfer Learning for Breast Ultrasound Cancer Detection: A Multi-Dataset Study

📅 2025-09-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Early breast cancer screening demands AI-assisted diagnostic methods that are highly robust and interpretable. To address this, we propose an explainable deep transfer learning framework tailored for multi-center breast ultrasound images. Our method integrates multiple backbone architectures—including ResNet-18, EfficientNet-B0, and GoogLeNet—enhances deep features via ensemble fusion with traditional classifiers (SVM/KNN), and employs Grad-CAM to generate interpretable lesion heatmaps. In cross-dataset evaluation, the ResNet-18 variant achieves 99.7% accuracy and 100% sensitivity for malignant lesion detection, substantially outperforming single-dataset baselines. The framework demonstrates strong generalizability and robustness across heterogeneous clinical sites, while its built-in interpretability mechanisms significantly improve clinical trustworthiness. This work establishes a practical paradigm for intelligent ultrasound diagnosis that simultaneously ensures high diagnostic accuracy, broad generalizability, and model transparency.
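The fusion idea described above — deep CNN features handed to a classical classifier — can be illustrated with a minimal sketch. The synthetic 512-dimensional vectors below merely stand in for ResNet-18 penultimate-layer embeddings, and a toy k-nearest-neighbour vote replaces a full scikit-learn `KNeighborsClassifier`; this is an illustrative assumption, not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 512-D "deep features" for two classes (benign=0, malignant=1).
# In the real pipeline these would come from a pretrained ResNet-18 backbone.
benign = rng.normal(loc=0.0, scale=1.0, size=(20, 512))
malignant = rng.normal(loc=3.0, scale=1.0, size=(20, 512))
X_train = np.vstack([benign, malignant])
y_train = np.array([0] * 20 + [1] * 20)

def knn_predict(x, X, y, k=5):
    """Classify one feature vector by majority vote among its k nearest neighbours."""
    d = np.linalg.norm(X - x, axis=1)      # Euclidean distance to every training feature
    nearest = y[np.argsort(d)[:k]]         # labels of the k closest training points
    return np.bincount(nearest).argmax()   # majority vote

query = rng.normal(loc=3.0, scale=1.0, size=512)  # a "malignant-like" query feature
print(knn_predict(query, X_train, y_train))       # prints 1 (malignant)
```

Swapping the vote for an SVM decision function leaves the structure unchanged: the CNN acts as a fixed feature extractor and only the lightweight classifier is fit on the target dataset.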

📝 Abstract
Breast cancer remains a leading cause of cancer-related mortality among women worldwide. Ultrasound imaging, widely used due to its safety and cost-effectiveness, plays a key role in early detection, especially in patients with dense breast tissue. This paper presents a comprehensive study on the application of machine learning and deep learning techniques for breast cancer classification using ultrasound images. Using datasets such as BUSI, BUS-BRA, and BrEaST-Lesions USG, we evaluate classical machine learning models (SVM, KNN) and deep convolutional neural networks (ResNet-18, EfficientNet-B0, GoogLeNet). Experimental results show that ResNet-18 achieves the highest accuracy (99.7%) and perfect sensitivity for malignant lesions. Classical ML models, though outperformed by CNNs, achieve competitive performance when enhanced with deep feature extraction. Grad-CAM visualizations further improve model transparency by highlighting diagnostically relevant image regions. These findings support the integration of AI-based diagnostic tools into clinical workflows and demonstrate the feasibility of deploying high-performing, interpretable systems for ultrasound-based breast cancer detection.
Problem

Research questions and friction points this paper is trying to address.

Breast cancer detection using ultrasound images
Evaluating machine and deep learning classification techniques
Enhancing model interpretability for clinical integration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Deep transfer learning with ResNet-18 architecture
Grad-CAM visualizations for model interpretability
Multi-dataset validation using BUSI, BUS-BRA, and BrEaST-Lesions USG
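The Grad-CAM step listed above reduces to a small computation: channel weights are the spatially averaged gradients of the class score, and the heatmap is the ReLU of the weighted sum of activation maps. The sketch below runs on synthetic tensors — `acts` and `grads` are stand-ins for a real network's last convolutional activations and their gradients, not outputs of the paper's models.

```python
import numpy as np

rng = np.random.default_rng(1)
acts = rng.random((8, 7, 7))    # K=8 channels of 7x7 activation maps (synthetic)
grads = rng.random((8, 7, 7))   # gradient of the malignant-class score w.r.t. acts

def grad_cam(acts, grads):
    """Return a [0, 1]-normalised Grad-CAM heatmap."""
    alphas = grads.mean(axis=(1, 2))          # channel weights: spatially pooled gradients
    cam = np.tensordot(alphas, acts, axes=1)  # weighted sum over channels -> (H, W)
    cam = np.maximum(cam, 0.0)                # ReLU: keep only positive evidence
    peak = cam.max()
    return cam / peak if peak > 0 else cam    # normalise for overlay visualisation

heatmap = grad_cam(acts, grads)
print(heatmap.shape)  # prints (7, 7)
```

Upsampling this coarse map to the input resolution and overlaying it on the ultrasound image yields the lesion heatmaps that support clinical interpretability.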
🔎 Similar Papers
2024-08-29 · Medical Imaging 2025: Digital and Computational Pathology · Citations: 1
Mohammad Abbadi — College of Engineering and Information Technology, University of Dubai, Dubai 14143, UAE
Yassine Himeur — College of Engineering and Information Technology, University of Dubai, Dubai 14143, UAE
Shadi Atalla — Associate Professor in Computing & Information Systems, University of Dubai
Wathiq Mansoor — College of Engineering and Information Technology, University of Dubai, Dubai 14143, UAE