🤖 AI Summary
To address insufficient generalization in medical image analysis caused by the coexistence of long-tailed class distributions and unseen categories, this paper proposes an open-set semi-supervised learning framework for long-tailed medical image recognition. It is the first to integrate open-set learning with semi-supervised learning in this setting, introducing a synergistic mechanism that combines feature-level regularization with classifier logit normalization to mitigate distributional bias and the small-sample generalization bottleneck. Additionally, it incorporates consistency regularization, contrastive feature regularization, and a long-tail-aware pseudo-labeling strategy. Evaluated on ISIC2018, ISIC2019, and TissueMNIST, the method achieves 3.2–5.7% improvements in closed-set accuracy and 6.1–9.4% gains in open-set F1-score. It significantly enhances recognition of tail classes and improves rejection of unknown classes, demonstrating robust generalization under realistic long-tailed, open-world conditions.
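The summary names classifier logit normalization as one half of the synergistic mechanism but gives no implementation details. As a hedged illustration only (the paper's exact formulation may differ), a common form of logit normalization rescales each logit vector to unit L2 norm with a temperature `tau`, which caps logit magnitudes so head classes cannot dominate through sheer norm and yields better-calibrated scores for rejecting unknown classes:

```python
import numpy as np

def logit_norm(logits: np.ndarray, tau: float = 0.04) -> np.ndarray:
    """Rescale each row of logits to unit L2 norm, divided by temperature tau.

    Bounding logit magnitudes prevents frequent (head) classes from
    dominating via large norms and sharpens open-set rejection scores.
    `tau` is a hyperparameter assumption, not a value from the paper.
    """
    norms = np.linalg.norm(logits, axis=-1, keepdims=True) + 1e-7  # avoid /0
    return logits / (norms * tau)

# Example: a [3, 4] logit vector becomes [0.6, 0.8] at tau = 1.0.
normalized = logit_norm(np.array([[3.0, 4.0]]), tau=1.0)
```

With `tau < 1` the normalized logits are scaled up uniformly, so softmax confidence is controlled by direction rather than raw magnitude.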
📝 Abstract
Many practical medical imaging scenarios include categories that are under-represented but still crucial. The relevance of image recognition models to real-world applications lies in their ability to generalize to these rare classes as well as to unseen classes. Real-world generalization requires accounting for several complicating factors. First, training data is highly imbalanced, which can bias the model toward the more frequently represented classes. Moreover, real-world data may contain unseen classes that need to be identified, and model performance suffers under data scarcity. While medical image recognition has been extensively addressed in the literature, current methods do not account for all of these real-world intricacies. To this end, we propose an open-set learning method for highly imbalanced medical datasets using a semi-supervised approach. Recognizing the adverse impact of the long-tailed distribution on the model's inherent characteristics, we implement a regularization strategy at the feature level, complemented by a classifier normalization technique. We conduct extensive experiments on the publicly available ISIC2018, ISIC2019, and TissueMNIST datasets with varying numbers of labelled samples. Our analysis shows that addressing the impact of long-tailed data in classification significantly improves the overall performance of the network in terms of closed-set and open-set accuracy on all datasets. Our code and trained models will be made publicly available at https://github.com/Daniyanaj/OpenLTR.
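The abstract's semi-supervised component relies on pseudo-labeling, which the summary above describes as long-tail-aware. The paper does not spell out the thresholding rule here; a minimal sketch of one plausible scheme, with an assumed linear schedule and hypothetical parameter names, lowers the confidence threshold for rare classes so tail categories are not starved of pseudo-labels:

```python
import numpy as np

def pseudo_labels(probs: np.ndarray, class_counts: np.ndarray,
                  base_thresh: float = 0.95, floor: float = 0.6):
    """Assign pseudo-labels with class-frequency-aware confidence thresholds.

    Head classes keep the strict `base_thresh`; tail classes are relaxed
    linearly toward `floor` in proportion to their relative frequency.
    The schedule and both hyperparameters are illustrative assumptions.
    Returns (predicted class per sample, mask of accepted pseudo-labels).
    """
    ratio = class_counts / class_counts.max()          # 1.0 for the head class
    thresh = floor + (base_thresh - floor) * ratio     # per-class threshold
    preds = probs.argmax(axis=1)                       # hard pseudo-label
    conf = probs.max(axis=1)                           # model confidence
    mask = conf >= thresh[preds]                       # keep confident ones
    return preds, mask

# A confident tail-class prediction (0.70) passes its relaxed threshold,
# while the same confidence on the head class would be rejected.
preds, mask = pseudo_labels(
    np.array([[0.70, 0.30], [0.97, 0.03], [0.30, 0.70]]),
    class_counts=np.array([100.0, 10.0]))
```

Only samples where `mask` is true would contribute to the unsupervised loss; the rejected ones are typically revisited as the model's confidence grows during training.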