Bias-Corrected Data Synthesis for Imbalanced Learning

📅 2025-10-29
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In class-imbalanced classification, synthesizing minority-class samples often introduces distributional bias, leading to model overfitting and degraded generalization. To address this, we propose a novel framework that estimates and corrects synthesis-induced bias using distributional information from the majority class. Unlike conventional approaches assuming synthesized samples follow the true minority-class distribution, our method leverages structural consistency in majority-class features to construct a provably consistent bias estimator, coupled with dynamic error calibration during training. Theoretically, we derive bounds on the bias estimation error and provide guarantees on improved prediction accuracy. Empirically, extensive experiments on benchmark datasets—including MNIST—demonstrate significant gains in F1-score, AUC, and robustness against label noise. Moreover, the framework naturally extends to multi-task learning and causal inference settings, offering broad applicability without architectural modification.

Technology Category

Application Category

📝 Abstract
Imbalanced data, where the positive samples represent only a small proportion compared to the negative samples, makes it challenging for classification problems to balance the false positive and false negative rates. A common approach to addressing the challenge involves generating synthetic data for the minority group and then training classification models with both observed and synthetic data. However, since the synthetic data depends on the observed data and fails to replicate the original data distribution accurately, prediction accuracy is reduced when the synthetic data is naively treated as the true data. In this paper, we address the bias introduced by synthetic data and provide consistent estimators for this bias by borrowing information from the majority group. We propose a bias correction procedure to mitigate the adverse effects of synthetic data, enhancing prediction accuracy while avoiding overfitting. This procedure is extended to broader scenarios with imbalanced data, such as imbalanced multi-task learning and causal inference. Theoretical properties, including bounds on bias estimation errors and improvements in prediction accuracy, are provided. Simulation results and data analysis on handwritten digit datasets demonstrate the effectiveness of our method.
Problem

Research questions and friction points this paper is trying to address.

Correcting bias in synthetic data for imbalanced classification problems
Improving prediction accuracy by mitigating synthetic data adverse effects
Extending bias correction to multi-task learning and causal inference
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bias correction procedure for synthetic data
Consistent bias estimators from majority group
Extends to multi-task learning and causal inference
🔎 Similar Papers
No similar papers found.
Pengfei Lyu
Pengfei Lyu
Ph.D student at Northeastern University
Machine LearningComputer visionMulti-modal image processing
Z
Zhengchi Ma
Department of Electrical & Computer Engineering, Duke University
Linjun Zhang
Linjun Zhang
Associate Professor of Statistics, Rutgers University
High-Dimensional StatisticsDeep LearningDifferential PrivacyAlgorithmic Fairness
A
Anru R. Zhang
Department of Biostatistics & Bioinformatics, Duke University; Department of Computer Science, Duke University