🤖 AI Summary
This work addresses the critical need for privacy-preserving removal of sensitive data from medical image classification models. We introduce machine unlearning into medical image classification, proposing an efficient unlearning method based on the SalUn algorithm. We systematically evaluate its efficacy on three public medical image datasets—PathMNIST, OrganAMNIST, and BloodMNIST—and investigate how data augmentation enhances unlearning quality and model robustness. Experiments demonstrate that SalUn achieves unlearning performance comparable to full retraining while requiring only local model updates, thereby significantly reducing computational overhead without compromising classification accuracy or generalization. This study bridges a key gap in applying machine unlearning to medical imaging and establishes a reproducible, scalable technical framework for governing healthcare AI models under stringent privacy requirements.
📝 Abstract
Machine unlearning aims to remove private or sensitive data from a pre-trained model while preserving the model's robustness. Despite recent advances, this technique has not been explored in medical image classification. This work evaluates the SalUn unlearning method by conducting experiments on the PathMNIST, OrganAMNIST, and BloodMNIST datasets. We also analyse the impact of data augmentation on the quality of unlearning. Results show that SalUn achieves performance close to that of full retraining, making it an efficient solution for medical applications.
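To make the core idea behind SalUn-style unlearning concrete, here is a minimal, self-contained sketch: compute a weight-saliency mask from the gradient of the loss on the forget set, then fine-tune only the masked weights with randomly relabelled forget-set samples. This is an illustrative toy (a NumPy linear classifier), not the paper's implementation; all names, the top-50% threshold, and the learning rate are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "pre-trained" linear classifier mapping 2-D inputs to 3 classes.
# Everything here is illustrative, not taken from the paper.
n_features, n_classes = 2, 3
W = rng.normal(size=(n_features, n_classes))

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def grad_ce(W, X, y):
    """Gradient of mean cross-entropy loss w.r.t. the weights W."""
    p = softmax(X @ W)
    p[np.arange(len(y)), y] -= 1.0
    return X.T @ p / len(y)

# Forget set: the samples whose influence we want removed.
X_f = rng.normal(size=(16, n_features))
y_f = rng.integers(0, n_classes, size=16)

# 1) Weight saliency: gradient magnitude of the loss on the forget set.
saliency = np.abs(grad_ce(W, X_f, y_f))

# 2) Saliency mask: update only the top-50% most salient weights
#    (the 50% fraction is an assumed hyperparameter).
mask = saliency >= np.quantile(saliency, 0.5)

# 3) Random-label fine-tuning restricted to the masked ("local") weights,
#    so most of the model stays untouched.
W0 = W.copy()
for _ in range(100):
    y_rand = rng.integers(0, n_classes, size=len(X_f))
    W -= 0.1 * mask * grad_ce(W, X_f, y_rand)
```

The mask is what makes the update "local": unmasked weights are provably unchanged after fine-tuning, which is the source of the computational savings over full retraining.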