🤖 AI Summary
This study addresses the growing challenge that increasingly realistic AI-generated text poses to authenticity verification in education, publishing, and digital security. To tackle this issue, the authors construct billion-scale, multi-domain corpora of both human- and AI-generated text and introduce two novel fine-tuning paradigms—Per LLM and Per LLM family—for training customized detectors tailored to individual models or to entire model families, respectively. Evaluated on a benchmark covering 21 prominent large language models, the proposed approach achieves a token-level accuracy of 99.6%, substantially outperforming existing open-source detection methods. This work thus establishes an effective pathway toward high-precision identification of AI-generated content.
📝 Abstract
The rapid progress of large language models has enabled the generation of text that closely resembles human writing, creating challenges for authenticity verification in education, publishing, and digital security. Detecting AI-generated text has therefore become a crucial technical and ethical issue. This paper presents a comprehensive study of AI-generated text detection based on large-scale corpora and novel training strategies. We introduce a 1-billion-token corpus of human-authored texts spanning multiple genres and a 1.9-billion-token corpus of AI-generated texts produced by prompting a variety of LLMs across diverse domains. Using these resources, we develop and evaluate numerous detection models and propose two training paradigms: Per LLM and Per LLM family fine-tuning. Across a 100-million-token benchmark covering 21 large language models, our best fine-tuned detector achieves up to $99.6\%$ token-level accuracy, substantially outperforming existing open-source baselines.
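To make the headline metric concrete, the following is a minimal sketch of how token-level accuracy could be computed: each token in the benchmark carries a gold label (human or AI), the detector predicts a label per token, and accuracy is the fraction of matching labels. The function name and label encoding are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch (not the authors' code): token-level accuracy
# over a benchmark where every token is labeled 0 (human) or 1 (AI).

def token_level_accuracy(predicted, gold):
    """Return the fraction of token labels that match the gold labels."""
    if len(predicted) != len(gold):
        raise ValueError("prediction and gold sequences must align token-for-token")
    correct = sum(p == g for p, g in zip(predicted, gold))
    return correct / len(gold)

# Toy example: 8 tokens, detector mislabels one AI token as human.
gold      = [0, 0, 1, 1, 1, 0, 1, 1]
predicted = [0, 0, 1, 1, 0, 0, 1, 1]
print(token_level_accuracy(predicted, gold))  # 0.875
```

A detector reaching the reported $99.6\%$ would mislabel roughly 4 tokens in every 1,000 under this metric.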