🤖 AI Summary
This study addresses the growing challenge posed by text produced with generative artificial intelligence (Gen-AI), which is increasingly indistinguishable from human-written content and thereby threatens academic integrity and information credibility. It presents the first systematic comparison of four neural architectures—a Multilayer Perceptron, a 1D Convolutional Neural Network (CNN), a MobileNet-based CNN, and a Transformer—for detecting AI-generated text across multiple languages and specialized domains, specifically art and mental health. Leveraging the COLING multilingual benchmark and a newly curated domain-specific dataset, the experiments demonstrate that the proposed supervised neural detectors exhibit superior stability and robustness in cross-lingual and cross-domain settings, significantly outperforming mainstream commercial tools such as ZeroGPT and GPTZero. These findings highlight critical limitations in current detection approaches and point toward more reliable AI-text identification.
📝 Abstract
The rapid proliferation of Large Language Models has significantly increased the difficulty of distinguishing between human-written and AI-generated texts, raising critical issues across academic, editorial, and social domains. This paper investigates the problem of AI-generated text detection through the design, implementation, and comparative evaluation of multiple machine-learning-based detectors. Four neural architectures are developed and analyzed: a Multilayer Perceptron, a one-dimensional Convolutional Neural Network, a MobileNet-based CNN, and a Transformer model. The proposed models are benchmarked against widely used online detectors, including ZeroGPT, GPTZero, QuillBot, Originality.AI, Sapling, IsGen, Rephrase, and Writer. Experiments are conducted on the COLING Multilingual Dataset, in both English and Italian configurations, as well as on an original thematic dataset focused on Art and Mental Health. Results show that the supervised detectors achieve more stable and robust performance than commercial tools across languages and domains, highlighting key strengths and limitations of current detection strategies.
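To make the detection setup concrete, the sketch below shows the forward pass of a 1D-CNN binary text classifier of the kind the abstract describes: token embeddings, a one-dimensional convolution over the token axis, global max pooling, and a sigmoid output giving P(AI-generated). This is an illustrative NumPy-only sketch with randomly initialized weights and assumed shapes, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative hyperparameters (assumptions, not from the paper).
VOCAB, EMB, KERNEL, FILTERS = 1000, 16, 3, 8

# Randomly initialized parameters stand in for trained weights.
embedding = rng.normal(size=(VOCAB, EMB))
conv_w = rng.normal(size=(FILTERS, KERNEL, EMB))  # FILTERS kernels spanning KERNEL tokens
conv_b = np.zeros(FILTERS)
dense_w = rng.normal(size=FILTERS)
dense_b = 0.0

def detect_score(token_ids):
    """Return a score in [0, 1] interpreted as P(AI-generated)."""
    x = embedding[token_ids]                          # (seq_len, EMB)
    seq_len = len(token_ids)
    # "Valid" 1D convolution along the token axis, then ReLU.
    conv = np.array([
        [np.sum(x[t:t + KERNEL] * conv_w[f]) + conv_b[f]
         for t in range(seq_len - KERNEL + 1)]
        for f in range(FILTERS)
    ])
    conv = np.maximum(conv, 0.0)
    pooled = conv.max(axis=1)                         # global max pooling -> (FILTERS,)
    logit = pooled @ dense_w + dense_b
    return 1.0 / (1.0 + np.exp(-logit))               # sigmoid for the binary decision

# Score a random 32-token sequence (stand-in for a tokenized document).
score = float(detect_score(rng.integers(0, VOCAB, size=32)))
```

In practice such a model would be trained end-to-end with a binary cross-entropy loss on labeled human/AI text pairs; the sketch only illustrates the architecture's data flow.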