🤖 AI Summary
This work proposes a novel data augmentation framework based on Decision Predicate Graphs (DPG-da) to address the limitations of traditional oversampling methods, which often generate unrealistic, infeasible, or uninterpretable synthetic samples. By extracting interpretable decision predicates from trained models and embedding domain-specific logical rules into the oversampling process, the proposed approach ensures that generated samples are not only diverse but also logically consistent and semantically plausible. Experimental results across multiple synthetic and real-world imbalanced datasets demonstrate that DPG-da significantly outperforms existing oversampling techniques in classification performance while providing transparent, traceable explanations for the synthesized instances.
📝 Abstract
Many machine learning classification tasks involve imbalanced datasets, which are often subject to over-sampling techniques aimed at improving model performance. However, these techniques are prone to generating unrealistic or infeasible samples. Furthermore, they often function as black boxes, lacking interpretability in their procedures. This opacity makes it difficult to track their effectiveness and make necessary adjustments, and they may ultimately fail to yield significant performance improvements.
To bridge this gap, we introduce the Decision Predicate Graphs for Data Augmentation (DPG-da), a framework that extracts interpretable decision predicates from trained models to capture domain rules and enforce them during sample generation. This design ensures that over-sampled data remain diverse, constraint-satisfying, and interpretable. In experiments on synthetic and real-world benchmark datasets, DPG-da consistently improves classification performance over traditional over-sampling methods, while guaranteeing logical validity and offering clear, interpretable explanations of the over-sampled data.
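The paper's actual algorithm is not detailed in this abstract, so the following is only a minimal sketch of the general idea it describes: generate candidate minority-class samples by interpolation (SMOTE-style) and accept only those that satisfy interpretable predicates, standing in for the domain rules a Decision Predicate Graph would extract from a trained model. The function names and predicates here are hypothetical, not from the paper.

```python
import random

def interpolate(a, b, alpha):
    """Linear interpolation between two feature vectors."""
    return [x + alpha * (y - x) for x, y in zip(a, b)]

def predicate_guided_oversample(minority, predicates, n_new, seed=0):
    """Generate up to n_new synthetic samples satisfying every predicate.

    Candidates that violate any predicate are rejected, so accepted
    samples are constraint-satisfying by construction, and the violated
    predicate can be reported to explain each rejection.
    """
    rng = random.Random(seed)
    accepted, attempts = [], 0
    while len(accepted) < n_new and attempts < 100 * n_new:
        a, b = rng.sample(minority, 2)
        candidate = interpolate(a, b, rng.random())
        if all(p(candidate) for p in predicates):  # enforce domain rules
            accepted.append(candidate)
        attempts += 1
    return accepted

# Hypothetical example: two features, two illustrative predicates
# (e.g. "feature 0 <= 5.0" and "feature 1 >= 0").
minority = [[1.0, 2.0], [2.0, 3.0], [4.0, 1.0]]
predicates = [lambda x: x[0] <= 5.0, lambda x: x[1] >= 0.0]
samples = predicate_guided_oversample(minority, predicates, n_new=5)
print(len(samples))                                   # 5
print(all(p(s) for s in samples for p in predicates))  # True
```

Because every accepted sample can be traced to the predicates it satisfies, this rejection-based scheme illustrates how constraint enforcement and interpretability can coexist, which is the property the abstract claims for DPG-da.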