🤖 AI Summary
Existing MGC detectors over-rely on superficial features and struggle to detect deeply paraphrased long-form text. To address this, we propose a dual-model framework integrating human writing style modeling with explicit discourse structure analysis. Our contributions include: (1) constructing the first long-text-oriented paraphrased LFQA and WP benchmark datasets; (2) designing a differential scoring mechanism and a PDTB-enhanced document-level encoding paradigm; and (3) jointly leveraging MhBART (style-aware) and DTransformer (discourse-structure-aware), augmented by GPT/DIPPER for high-quality synthetic training data generation. Evaluated on the paraLFQA, paraWP, and M4 benchmarks, our method achieves absolute accuracy gains of 15.5%, 4.0%, and 1.5%, respectively, outperforming state-of-the-art approaches. It effectively captures deceptive syntactic patterns and cross-sentence structural anomalies, demonstrating robustness against sophisticated paraphrasing.
📝 Abstract
The availability of high-quality APIs for Large Language Models (LLMs) has facilitated the widespread creation of Machine-Generated Content (MGC), posing challenges such as academic plagiarism and the spread of misinformation. Existing MGC detectors often focus solely on surface-level information, overlooking implicit and structural features. This makes them susceptible to deception by surface-level sentence patterns, particularly in longer texts and in texts that have subsequently been paraphrased. To overcome these challenges, we introduce novel methodologies and datasets. In addition to the publicly available Plagbench dataset, we developed the paraphrased Long-Form Question and Answer (paraLFQA) and paraphrased Writing Prompts (paraWP) datasets by extending their original versions with GPT and DIPPER, a discourse-aware paraphrasing tool. To address the challenge of detecting highly similar paraphrased texts, we propose MhBART, an encoder-decoder model designed to emulate human writing style while incorporating a novel difference score mechanism. This model outperforms strong classifier baselines and identifies deceptive sentence patterns. To better capture the structure of longer texts at the document level, we propose DTransformer, a model that integrates discourse analysis through PDTB preprocessing to encode structural features. This yields substantial performance gains across all benchmarks -- 15.5% absolute improvement on paraLFQA, 4% absolute improvement on paraWP, and 1.5% absolute improvement on M4 compared to SOTA approaches.
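The difference score mechanism behind MhBART can be sketched as follows, with heavy caveats: the abstract does not specify the scoring function, so this stand-in assumes the intuition that a model fine-tuned on human writing reconstructs human text closely but drifts on machine-generated text. The normalized token-level edit distance used here, and the idea of comparing an input against a model reconstruction, are illustrative assumptions; the actual reconstruction would come from the BART-style encoder-decoder, which is not modeled below.

```python
# Illustrative sketch only: MhBART pairs a human-style encoder-decoder
# with a difference score between the input and its reconstruction.
# The scoring function here (normalized token edit distance) is a
# hypothetical stand-in, not the paper's definition.

def token_edit_distance(a, b):
    """Levenshtein distance over token sequences (Wagner-Fischer, O(n) memory)."""
    m, n = len(a), len(b)
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                          # deletion
                        dp[j - 1] + 1,                      # insertion
                        prev + (a[i - 1] != b[j - 1]))      # substitution
            prev = cur
    return dp[n]

def difference_score(original: str, reconstruction: str) -> float:
    """Normalized divergence between a text and its model reconstruction.

    Intuition from the abstract: low for human text (the human-style
    model reproduces it faithfully), higher for machine-generated text.
    """
    a, b = original.split(), reconstruction.split()
    if not a and not b:
        return 0.0
    return token_edit_distance(a, b) / max(len(a), len(b))
```

A detector built this way would threshold the score: `difference_score(text, reconstruct(text))`, where `reconstruct` is the fine-tuned model's output and the threshold is tuned on validation data (both hypothetical here).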