Sure! Here's a short and concise title for your paper: "Contamination in Generated Text Detection Benchmarks"

📅 2025-11-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing AI-generated text detection benchmarks suffer from data contamination that leads detectors to rely on spurious correlations, such as fixed prefixes or refusal patterns, compromising their robustness and generalization. Method: We propose the first data purification framework specifically designed for detection tasks, integrating textual pattern analysis, rule-driven filtering, and adversarial-aware re-cleaning to identify and eliminate systematic biases in synthetic texts. Contribution/Results: Evaluated on the DetectRL benchmark, our purified datasets yield detectors with significantly improved defense rates against direct evasion attacks and markedly reduced reliance on spurious correlations. Purified models also generalize better, remaining stable across diverse LLMs and domains. The contamination-mitigated benchmark dataset is publicly released to support rigorous, trustworthy AI detection research.

📝 Abstract
Large language models are increasingly used for many applications. To prevent illicit use, it is desirable to be able to detect AI-generated text. Training and evaluation of such detectors critically depend on suitable benchmark datasets. Several groups took on the tedious work of collecting, curating, and publishing large and diverse datasets for this task. However, it remains an open challenge to ensure high quality in all relevant aspects of such a dataset. For example, the DetectRL benchmark exhibits relatively simple patterns of AI-generation in 98.5% of the Claude-LLM data. These patterns may include introductory words such as "Sure! Here is the academic article abstract:", or instances where the LLM rejects the prompted task. In this work, we demonstrate that detectors trained on such data use such patterns as shortcuts, which facilitates spoofing attacks on the trained detectors. We consequently reprocessed the DetectRL dataset with several cleansing operations. Experiments show that such data cleansing makes direct attacks more difficult. The reprocessed dataset is publicly available.
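A prevalence figure like the 98.5% reported for the Claude-LLM data suggests a simple corpus audit: count how many samples open with a known shortcut pattern. A minimal sketch, where the patterns and the `shortcut_rate` helper are illustrative assumptions rather than the authors' measurement code:

```python
import re

# Illustrative shortcut markers; the paper's actual pattern inventory
# is not reproduced here.
SHORTCUT_PATTERNS = [
    re.compile(r"^Sure! Here", re.IGNORECASE),        # boilerplate prefix
    re.compile(r"^I (?:cannot|can't|won't)", re.IGNORECASE),  # refusal
]


def shortcut_rate(samples: list[str]) -> float:
    """Fraction of samples whose opening matches a known shortcut pattern."""
    hits = sum(
        1
        for s in samples
        if any(p.match(s.lstrip()) for p in SHORTCUT_PATTERNS)
    )
    return hits / len(samples) if samples else 0.0
```

Running such an audit before training makes shortcut contamination visible up front, instead of discovering it later through a detector's suspicious reliance on opening tokens.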
Problem

Research questions and friction points this paper is trying to address.

Detect AI-generated text contamination in benchmark datasets
Address shortcut learning in detectors from dataset patterns
Improve dataset quality to prevent detector spoofing attacks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cleansing AI-generated text detection benchmarks
Reprocessing datasets to remove generation patterns
Making spoofing attacks on detectors more difficult
Philipp Dingfelder
IT Security Infrastructures Lab, FAU Erlangen-Nürnberg, Martensstr. 3, 91058 Erlangen, Germany
Christian Riess
Friedrich-Alexander-University of Erlangen-Nuremberg
Digital Forensics · Multimedia Security · Machine Learning · Image Processing