A Self-supervised Learning Framework for Imbalanced Medical Imaging Datasets

📅 2026-04-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenges of scarce annotations and severe class imbalance in medical image analysis by proposing an Asymmetric Multi-Image Multi-View (AMIMV) framework based on self-supervised learning. The method constructs multi-view sample pairs through a novel asymmetric augmentation strategy and systematically evaluates the robustness of various self-supervised algorithms under diverse long-tailed distributions. Extensive experiments across 11 medical imaging datasets, including MedMNIST, demonstrate that the proposed approach significantly improves classification performance, achieving accuracy gains of 4.25%, 1.88%, and 3.10% on RetinaMNIST, TissueMNIST, and DermaMNIST, respectively. These results validate the effectiveness and superiority of AMIMV in scenarios with limited labeled data and extreme class imbalance.

📝 Abstract
Two problems often plague medical imaging analysis: 1) the non-availability of large quantities of labeled training data, and 2) imbalanced data, i.e., abundant data are available for frequent classes, whereas data are highly limited for rare classes. Self-supervised learning (SSL) methods have been proposed to deal with the first problem to a certain extent, but the robustness of SSL to imbalanced data has rarely been investigated in the domain of medical image classification. In this work, we make the following contributions: 1) The MIMV method proposed in our earlier work is extended with a new augmentation strategy to construct asymmetric multi-image, multi-view (AMIMV) pairs, addressing both data scarcity and dataset imbalance in medical image classification. 2) We carry out a data analysis to evaluate the robustness of AMIMV under varying degrees of class imbalance in medical imaging. 3) We evaluate eight representative SSL methods on 11 medical imaging datasets (MedMNIST) under long-tailed distributions and limited supervision. Our experimental results on the MedMNIST datasets show improvements of 4.25% on RetinaMNIST, 1.88% on TissueMNIST, and 3.10% on DermaMNIST.
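The abstract describes constructing asymmetric multi-image, multi-view pairs, where the two views of a positive pair come from different augmentation pipelines (and, per the "multi-image" idea, potentially from different images). The paper's exact augmentation strategy and pairing rule are not reproduced here; the following is a minimal conceptual sketch in which `weak_augment`, `strong_augment`, and the cross-image pairing are illustrative assumptions, with images stood in for by flat lists of pixel values.

```python
import random

# Hypothetical stand-ins for the paper's asymmetric augmentations:
# a "weak" transform (light noise) vs. a "strong" transform
# (heavier noise plus a stronger distortion).
def weak_augment(img):
    return [p + random.uniform(-0.01, 0.01) for p in img]

def strong_augment(img):
    out = [p + random.uniform(-0.1, 0.1) for p in img]
    random.shuffle(out)  # a deliberately harsher distortion
    return out

def make_amimv_pairs(dataset, pairs_per_image=1):
    """Build asymmetric multi-image, multi-view positive pairs.

    Assumption for illustration: each pair joins a weakly augmented
    view of one image with a strongly augmented view of a randomly
    drawn (possibly different) image from the dataset.
    """
    pairs = []
    for img in dataset:
        for _ in range(pairs_per_image):
            other = random.choice(dataset)
            pairs.append((weak_augment(img), strong_augment(other)))
    return pairs
```

In an actual SSL pipeline these pairs would feed a contrastive or similar pretraining objective; the asymmetry (weak vs. strong view) is the part the abstract highlights as new relative to the earlier MIMV method.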
Problem

Research questions and friction points this paper is trying to address.

self-supervised learning
imbalanced data
medical imaging
class imbalance
limited labeled data
Innovation

Methods, ideas, or system contributions that make the work stand out.

self-supervised learning
class imbalance
medical imaging
asymmetric multi-image multi-view
data augmentation
Yash Kumar Sharma
Artificial Intelligence lab, School of Computer & Information Sciences, University of Hyderabad, Hyderabad 500046, India
Charan Ramtej Kodi
Artificial Intelligence lab, School of Computer & Information Sciences, University of Hyderabad, Hyderabad 500046, India
Vineet Padmanabhan
Professor of Computer Science, University of Hyderabad