🤖 AI Summary
Class imbalance severely degrades model discrimination for minority classes, hindering deployment in high-stakes domains such as healthcare and finance. This paper systematically surveys over one hundred imbalance mitigation strategies, introducing the first unified taxonomy that integrates generative approaches (e.g., GANs, VAEs) with classical resampling techniques, including SMOTE, neighborhood density estimation, and adaptive threshold-based resampling. We further propose a multidimensional evaluation framework and practical deployment guidelines tailored to real-world constraints. Empirical validation across diverse benchmark tasks demonstrates that the surveyed methods improve minority-class F1-score by 12–35%. Crucially, we identify a novel pathway for jointly optimizing interpretability and generalization, bridging theoretical advances with engineering feasibility. This work provides a comprehensive, actionable foundation both for advancing imbalance learning theory and for enabling robust, trustworthy deployment in critical applications.
📝 Abstract
Imbalanced data poses a significant obstacle in machine learning, as an unequal distribution of class labels often results in skewed predictions and diminished model accuracy. To mitigate this problem, various resampling strategies have been developed, encompassing both oversampling and undersampling techniques that modify class proportions. Conventional oversampling approaches such as SMOTE enhance the representation of the minority class, whereas undersampling methods reduce the size of the majority class. Advances in deep learning have enabled more sophisticated solutions, such as Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs), which can produce high-quality synthetic examples. This paper reviews a broad spectrum of data balancing methods, classifying them into categories including synthetic oversampling, adaptive techniques, generative models, ensemble-based strategies, hybrid approaches, undersampling, and neighbor-based methods. It also highlights recent developments in resampling techniques and discusses practical implementations and case studies that validate their effectiveness. The paper concludes by offering perspectives on potential directions for future exploration in this domain.
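To make the core idea of synthetic oversampling concrete, the sketch below shows a minimal SMOTE-style generator: each synthetic sample is a random interpolation between a minority-class point and one of its k nearest minority neighbors. This is an illustrative simplification, not the paper's implementation; the function name and parameters are our own, and production code would typically use a tested library such as imbalanced-learn.

```python
import numpy as np

def smote_like_oversample(X_min, n_new, k=5, rng=None):
    """Generate n_new synthetic minority samples by interpolating between
    each chosen point and one of its k nearest minority neighbors
    (a minimal SMOTE-style sketch, not the original algorithm's full logic)."""
    rng = np.random.default_rng(rng)
    n = len(X_min)
    # Pairwise Euclidean distances within the minority class.
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)           # exclude each point from its own neighbor list
    neighbors = np.argsort(d, axis=1)[:, :k]   # indices of the k nearest neighbors
    synthetic = np.empty((n_new, X_min.shape[1]))
    for i in range(n_new):
        a = rng.integers(n)                              # pick a random minority point
        b = neighbors[a, rng.integers(min(k, n - 1))]    # pick one of its neighbors
        lam = rng.random()                               # interpolation factor in [0, 1)
        synthetic[i] = X_min[a] + lam * (X_min[b] - X_min[a])
    return synthetic
```

Because each synthetic point is a convex combination of two minority samples, the generated data stays inside the region spanned by the minority class rather than duplicating existing points, which is what distinguishes SMOTE from naive random oversampling.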