AI Summary
Existing AI-generated text detectors fail when confronted with humanized outputs: text that is semantically preserved but syntactically altered by AI-humanization tools. Method: We propose a robust detection framework featuring (i) a novel data-driven augmentation strategy integrating centering-based preprocessing, multi-source humanization synthesis, and fine-grained semantic fidelity assessment; and (ii) a self-adversarial training paradigm that jointly optimizes detection accuracy and adversarial robustness to enhance cross-tool generalization. Results: Evaluated across 19 mainstream humanization tools, our method achieves high detection accuracy while maintaining a false positive rate below 1.2%. Crucially, it demonstrates strong generalization to unseen humanization techniques. This work overcomes a critical limitation of prior detectors in identifying semantically reconstructed text, establishing a scalable, highly robust technical pathway for AI content authentication.
Abstract
AI humanizers are a new class of online software tools that paraphrase and rewrite AI-generated text so that it evades AI-detection software. We study 19 AI humanizer and paraphrasing tools and qualitatively assess their effects and how faithfully they preserve the meaning of the original text. We show that many existing AI detectors fail to detect humanized text. Finally, we demonstrate a robust model that detects humanized AI text while maintaining a low false positive rate, using a data-centric augmentation approach. We then attack our own detector by fine-tuning a paraphrasing model against the detector's predictions, and show that the detector's cross-humanizer generalization is sufficient to remain robust to this attack.
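The interplay between data-centric augmentation and a self-adversarial attack can be illustrated with a deliberately tiny toy model. Everything here is an illustrative assumption, not the paper's actual system: the detector is reduced to a threshold on a synthetic one-dimensional "AI-likeness" score, humanizer tools are modeled as bounded downward shifts of that score, and the adaptive attacker is a best response against the detector's threshold under a semantic-fidelity budget.

```python
def best_threshold(human, ai):
    """Toy 'detector': pick the score threshold that maximizes training accuracy."""
    best_t, best_acc = None, -1.0
    for t in sorted(human + ai):
        correct = sum(h < t for h in human) + sum(a >= t for a in ai)
        acc = correct / (len(human) + len(ai))
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# Synthetic 1-D "AI-likeness" scores (illustrative only):
# human-written text scores low, raw AI-generated text scores high.
human = [i / 100 for i in range(-40, 21)]   # -0.40 .. 0.20
ai    = [i / 100 for i in range(80, 161)]   #  0.80 .. 1.60

def humanize(scores, shift):
    """A 'humanizer tool' lowers each score by a fixed amount; semantic-fidelity
    constraints bound how far it can shift without changing the meaning."""
    return [x - shift for x in scores]

# Naive detector: trained on raw AI text only.
t_naive = best_threshold(human, ai)

# Data-centric augmentation: fold outputs of several simulated humanizer
# tools (different shift strengths) into the AI class before training.
augmented = ai + humanize(ai, 0.3) + humanize(ai, 0.4) + humanize(ai, 0.5)
t_robust = best_threshold(human, augmented)

def adaptive_attack(scores, t, budget=0.5):
    """Self-adversarial attacker: push each score just below the detector's
    threshold, but no further than the fidelity budget allows."""
    return [max(x - budget, min(x, t - 0.01)) for x in scores]

recall_naive  = sum(a >= t_naive  for a in adaptive_attack(ai, t_naive))  / len(ai)
recall_robust = sum(a >= t_robust for a in adaptive_attack(ai, t_robust)) / len(ai)
fpr_robust    = sum(h >= t_robust for h in human) / len(human)
```

In this toy setup, the adaptive attacker drives the naive detector's recall on humanized samples well below half, while the augmented detector's threshold already sits below the attacker's fidelity floor, so its recall stays at 1.0 with zero false positives on the human samples. The real paper's claim is analogous but far stronger: the fidelity constraint on meaning-preserving paraphrase is what makes cross-humanizer generalization possible.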