🤖 AI Summary
To address the degradation of model performance in imbalanced classification caused by label noise and complex class distributions, this paper proposes a hyperparameter-free, noise-robust, density-aware oversampling method. The approach employs Gaussian kernel density estimation (KDE) to adaptively identify high-density "safe" regions and low-density "noisy" or ambiguous regions; synthetic samples are generated exclusively within safe regions. It further integrates a boundary-aware identification strategy into an enhanced SMOTE framework. Its core innovation lies in a density-driven regional discrimination mechanism that inherently avoids noise contamination, thereby significantly improving class separability and model robustness. Extensive experiments on multiple binary classification benchmark datasets demonstrate that the proposed method consistently outperforms state-of-the-art oversampling techniques across key metrics, including Matthews Correlation Coefficient (MCC), balanced accuracy, and Area Under the Precision-Recall Curve (AUPRC), particularly under realistic noisy conditions.
📝 Abstract
Imbalanced classification is a significant challenge in machine learning, especially in critical applications like medical diagnosis, fraud detection, and cybersecurity. Traditional oversampling techniques, such as SMOTE, often fail to handle label noise and complex data distributions, leading to reduced classification accuracy. In this paper, we propose GK-SMOTE, a hyperparameter-free, noise-resilient extension of SMOTE, built on Gaussian Kernel Density Estimation (KDE). GK-SMOTE enhances class separability by generating synthetic samples in high-density minority regions, while effectively avoiding noisy or ambiguous areas. This self-adaptive approach uses Gaussian KDE to differentiate between safe and noisy regions, ensuring more accurate sample generation without requiring extensive parameter tuning. Our extensive experiments on diverse binary classification datasets demonstrate that GK-SMOTE outperforms existing state-of-the-art oversampling techniques across key evaluation metrics, including MCC, Balanced Accuracy, and AUPRC. The proposed method offers a robust, efficient solution for imbalanced classification tasks, especially in noisy data environments, making it an attractive choice for real-world applications.
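The core idea described above can be sketched in a few lines: score each minority sample with a Gaussian KDE over the minority class, treat samples above a density cutoff as "safe", and interpolate new samples only between safe points. This is a minimal illustrative sketch, not the paper's implementation: the bandwidth, the median-density cutoff, and the use of random safe pairs (in place of SMOTE's usual k-nearest-neighbor step) are simplifying assumptions made here for brevity.

```python
import math
import random

def gaussian_kde_scores(points, bandwidth=0.5):
    """Score each point by a Gaussian-kernel density estimate over `points`.
    Bandwidth is fixed here; the paper's method is described as self-adaptive."""
    scores = []
    for x in points:
        total = 0.0
        for y in points:
            d2 = sum((a - b) ** 2 for a, b in zip(x, y))
            total += math.exp(-d2 / (2.0 * bandwidth ** 2))
        scores.append(total / len(points))
    return scores

def density_aware_oversample(minority, n_new, bandwidth=0.5, seed=0):
    """SMOTE-style interpolation restricted to high-density ('safe') minority
    samples; low-density (likely noisy) samples never seed synthetic points."""
    rng = random.Random(seed)
    scores = gaussian_kde_scores(minority, bandwidth)
    cutoff = sorted(scores)[len(scores) // 2]  # median density as the safe threshold (an assumption)
    safe = [p for p, s in zip(minority, scores) if s >= cutoff]
    synthetic = []
    for _ in range(n_new):
        a, b = rng.sample(safe, 2)             # random safe pair instead of k-NN, for brevity
        lam = rng.random()
        synthetic.append(tuple(ai + lam * (bi - ai) for ai, bi in zip(a, b)))
    return synthetic

# A tight minority cluster plus one far-away (noisy) point: the outlier falls
# below the density cutoff, so every synthetic sample stays inside the cluster.
minority = [(0.0, 0.0), (0.2, 0.1), (0.1, 0.3), (0.3, 0.2), (10.0, 10.0)]
new_samples = density_aware_oversample(minority, n_new=5, seed=1)
```

Because interpolation is confined to the safe set, synthetic points lie in the convex hull of high-density minority samples, which is what keeps noisy or ambiguous regions free of generated data.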