🤖 AI Summary
This study addresses the challenge of reducing the detectability of AI-generated text (AIGT) and enhancing its robustness against detection systems. Methodologically, it pioneers the integration of SHAP and LIME into AIGT perturbation, enabling interpretable identification of high-impact tokens and guiding four explainability-driven token substitution strategies. Additionally, it introduces a robust, cross-lingual and cross-domain ensemble detector that fuses multi-model predictions and heterogeneous textual features for binary classification. Key contributions include: (1) a fine-grained, explainable AI (XAI)-guided perturbation paradigm that significantly degrades the performance of mainstream single-model detectors; and (2) an ensemble detector achieving AUC > 0.92 across diverse languages and domains, demonstrating strong generalization against various token-level adversarial attacks. The approach advances both AIGT obfuscation and detection robustness through principled interpretability-aware design.
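The perturbation idea above — score each token's influence on a detector, then substitute the highest-impact ones — can be sketched with a toy example. Everything here is a hypothetical stand-in: the "detector" is a trivial lexical scorer, the attribution is a leave-one-out (occlusion-style) estimate rather than the paper's SHAP/LIME pipeline, and the synonym table is illustrative.

```python
# Sketch of explainability-guided token substitution.
# detect_score, SYNONYMS, and the attribution method are illustrative
# placeholders, not the study's actual models or strategies.

def detect_score(tokens):
    """Toy AIGT 'detector': fraction of tokens from a small marker set
    that we pretend correlates with machine-generated style."""
    AI_MARKERS = {"furthermore", "moreover", "delve", "landscape", "notably"}
    if not tokens:
        return 0.0
    return sum(t.lower() in AI_MARKERS for t in tokens) / len(tokens)

def token_importance(tokens, score_fn):
    """Leave-one-out attribution: a token's importance is the score drop
    when it is removed (SHAP/LIME give finer-grained estimates)."""
    base = score_fn(tokens)
    return [base - score_fn(tokens[:i] + tokens[i + 1:])
            for i in range(len(tokens))]

def perturb(tokens, score_fn, synonyms, k=2):
    """Replace the k highest-impact tokens with synonyms, if available."""
    imps = token_importance(tokens, score_fn)
    ranked = sorted(range(len(tokens)), key=lambda i: imps[i], reverse=True)
    out = list(tokens)
    for i in ranked[:k]:
        out[i] = synonyms.get(out[i].lower(), out[i])
    return out

SYNONYMS = {"furthermore": "also", "moreover": "plus", "delve": "dig"}

text = "Furthermore we delve into the evolving landscape of detection".split()
perturbed = perturb(text, detect_score, SYNONYMS, k=2)
```

Substituting only the top-ranked tokens changes little of the text yet lowers the detector's score, which is the intuition behind attacking a single classifier through its most influential features.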
📝 Abstract
Generative models, especially large language models (LLMs), have shown remarkable progress in producing human-like text. However, their output often exhibits patterns that make it easier to detect than human-written text. In this paper, we investigate how explainable AI (XAI) methods can be used to reduce the detectability of AI-generated text (AIGT), and we introduce a robust ensemble-based detection approach. We begin by training an ensemble classifier to distinguish AIGT from human-written text, then apply SHAP and LIME to identify the tokens that most strongly influence its predictions. We propose four explainability-based token replacement strategies to modify these influential tokens. Our findings show that these replacement strategies can significantly diminish a single classifier's ability to detect AIGT. However, our ensemble classifier maintains strong performance across multiple languages and domains, showing that a multi-model approach can mitigate the impact of token-level manipulations. Taken together, these results show that XAI methods can make AIGT harder to detect by targeting its most influential tokens, while underscoring the need for robust, ensemble-based detection strategies that can adapt to evolving obfuscation techniques.
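Why an ensemble resists token-level manipulation can be illustrated with a minimal sketch: several heterogeneous scorers are fused by averaging, so a substitution attack that zeroes out one lexical signal leaves the others intact. The three scorers below are toy stand-ins, not the paper's actual classifiers or features.

```python
# Minimal ensemble fusion sketch. All three scorers are illustrative
# placeholders for heterogeneous detectors; the fusion rule is a plain
# weighted average, assumed here for simplicity.

def marker_score(tokens):
    """Scorer 1: lexical markers (easily fooled by token substitution)."""
    MARKERS = {"furthermore", "moreover", "notably"}
    return sum(t.lower() in MARKERS for t in tokens) / max(len(tokens), 1)

def repetition_score(tokens):
    """Scorer 2: low type/token ratio as a crude repetitiveness signal."""
    if not tokens:
        return 0.0
    return 1.0 - len({t.lower() for t in tokens}) / len(tokens)

def length_uniformity_score(tokens):
    """Scorer 3: near-uniform token lengths as a crude style signal."""
    if len(tokens) < 2:
        return 0.0
    lengths = [len(t) for t in tokens]
    mean = sum(lengths) / len(lengths)
    var = sum((l - mean) ** 2 for l in lengths) / len(lengths)
    return 1.0 / (1.0 + var)  # higher when lengths are near-uniform

def ensemble_score(tokens, weights=(1 / 3, 1 / 3, 1 / 3)):
    """Fuse the heterogeneous scorers by weighted averaging."""
    scorers = (marker_score, repetition_score, length_uniformity_score)
    return sum(w * s(tokens) for w, s in zip(weights, scorers))

original = "Furthermore the model moreover repeats the the pattern".split()
# A marker-targeted substitution attack on scorer 1 only:
perturbed = ["also" if t.lower() == "furthermore"
             else "plus" if t.lower() == "moreover"
             else t for t in original]
```

After the attack, `marker_score` drops to zero, but the repetition and length signals still fire, so the fused score degrades far less than any single fooled detector — the same intuition behind the multi-model robustness reported above.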