Are AI Detectors Good Enough? A Survey on Quality of Datasets With Machine-Generated Texts

📅 2024-10-18
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the poor generalization and inflated benchmark accuracy of AI-generated text detectors in real-world settings. We propose the first quality assessment framework for generative text datasets, grounded in a systematic review and multidimensional diagnostic analysis—encompassing diversity, authenticity, and annotation consistency. Our analysis reveals pervasive distributional shifts and substantial human annotation noise across mainstream evaluation benchmarks, leading to detector failure under realistic conditions. We further identify data bias and insufficient generalization capacity as fundamental performance bottlenecks. To mitigate these issues, we introduce a novel paradigm wherein high-quality AI-generated texts are leveraged to refine detector training and optimize dataset curation. Cross-dataset validation demonstrates that our approach improves detector robustness by 12–28% in AUC, significantly enhancing reliability in practical deployment scenarios.

Technology Category

Application Category

📝 Abstract
The rapid development of autoregressive Large Language Models (LLMs) has significantly improved the quality of generated texts, necessitating reliable machine-generated text detectors. A huge number of detectors and collections with AI fragments have emerged, and several detection methods even showed recognition quality up to 99.9% according to the target metrics in such collections. However, the quality of such detectors tends to drop dramatically in the wild, posing a question: Are detectors actually highly trustworthy or do their high benchmark scores come from the poor quality of evaluation datasets? In this paper, we emphasise the need for robust and qualitative methods for evaluating generated data to be secure against bias and low generalising ability of future model. We present a systematic review of datasets from competitions dedicated to AI-generated content detection and propose methods for evaluating the quality of datasets containing AI-generated fragments. In addition, we discuss the possibility of using high-quality generated data to achieve two goals: improving the training of detection models and improving the training datasets themselves. Our contribution aims to facilitate a better understanding of the dynamics between human and machine text, which will ultimately support the integrity of information in an increasingly automated world.
Problem

Research questions and friction points this paper is trying to address.

AI-generated Text
Detector Performance
Information Reliability
Innovation

Methods, ideas, or system contributions that make the work stand out.

AI-generated text detection
Enhanced testing standards
Optimized training datasets
🔎 Similar Papers
No similar papers found.