🤖 AI Summary
To address the scarcity of labeled data in low-resource settings and the limitations of existing data augmentation methods (noise injection, semantic drift, and disrupted contextual coherence), this paper proposes an XAI-driven, context-aware data augmentation framework. The method introduces an attribution-aligned augmentation strategy selection mechanism that uses Grad-CAM and SHAP interpretability feedback to guide contrastive-learning-informed conditional GANs toward generating decision-relevant, semantically faithful synthetic samples. Domain-knowledge-enhanced semantic similarity constraints further preserve contextual consistency and task relevance during perturbation. Evaluated on ImageNet-1K and CheXpert, the framework improves ResNet-50's average top-1 accuracy by 2.3%. It also substantially improves interpretability metrics, raising Faithfulness by 31% and Plausibility by 27%. To the authors' knowledge, this is the first approach to achieve dynamic co-optimization of data augmentation and model attribution.
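The two core ideas in the summary, perturbing only regions an attribution method deems decision-irrelevant, and accepting a synthetic sample only if it stays semantically close to the original, could be sketched roughly as below. This is an illustrative toy sketch, not the paper's implementation: the function names, the low-attribution masking rule, and the cosine-similarity threshold are all assumptions standing in for the Grad-CAM/SHAP feedback and the domain-knowledge-enhanced similarity constraint.

```python
import math

def low_attribution_mask(attribution, keep_ratio=0.5):
    """Mark the pixels whose attribution score falls in the lowest
    `keep_ratio` fraction. Only these positions would be eligible for
    perturbation, so decision-relevant regions stay intact.
    (`attribution` is a 2D list of floats, e.g. a toy Grad-CAM map.)"""
    flat = sorted(v for row in attribution for v in row)
    cutoff = flat[max(0, int(len(flat) * keep_ratio) - 1)]
    return [[v <= cutoff for v in row] for row in attribution]

def cosine_similarity(a, b):
    """Plain cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def accept_augmentation(orig_feat, aug_feat, threshold=0.9):
    """Stand-in for the semantic similarity constraint: keep a synthetic
    sample only if its feature vector remains close to the original's."""
    return cosine_similarity(orig_feat, aug_feat) >= threshold
```

In the paper's framework this gating would presumably operate on real attribution maps and learned embeddings; the sketch only shows the control flow of attribution-aligned selection plus a similarity filter.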