Can AI-Generated Persuasion Be Detected? Persuaficial Benchmark and AI vs. Human Linguistic Differences

📅 2026-01-08
🏛️ arXiv.org
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This study addresses the growing risk of misuse associated with AI-generated persuasive texts, which can be particularly difficult to detect due to their subtle and covert nature. To systematically investigate this challenge, the authors introduce Persuaficial, the first benchmark dataset specifically designed for multilingual persuasive text detection. Leveraging controlled generation techniques, multilingual NLP methods, and linguistic feature analysis, the work evaluates the detectability gap between human- and large language model–generated persuasive content. The findings reveal that while overtly AI-generated persuasive texts are relatively easy to identify, detection performance drops significantly when models employ implicit persuasion strategies. Furthermore, the study uncovers nuanced linguistic differences in how humans and AI deploy subtle persuasive tactics, offering new insights and a foundational resource for developing more interpretable and effective AI-generated text detection systems.

๐Ÿ“ Abstract
Large Language Models (LLMs) can generate highly persuasive text, raising concerns about their misuse for propaganda, manipulation, and other harmful purposes. This leads us to our central question: Is LLM-generated persuasion more difficult to automatically detect than human-written persuasion? To address this, we categorize controllable generation approaches for producing persuasive content with LLMs and introduce Persuaficial, a high-quality multilingual benchmark covering six languages: English, German, Polish, Italian, French, and Russian. Using this benchmark, we conduct extensive empirical evaluations comparing human-authored and LLM-generated persuasive texts. We find that although overtly persuasive LLM-generated texts can be easier to detect than human-written ones, subtle LLM-generated persuasion consistently degrades automatic detection performance. Beyond detection performance, we provide the first comprehensive linguistic analysis contrasting human- and LLM-generated persuasive texts, offering insights that may guide the development of more interpretable and robust detection tools.
Problem

Research questions and friction points this paper is trying to address.

AI-generated persuasion
detection
large language models
human vs. AI text
persuasive content
Innovation

Methods, ideas, or system contributions that make the work stand out.

Persuaficial
LLM-generated persuasion
multilingual benchmark
automatic detection
linguistic analysis
🔎 Similar Papers
No similar papers found.
Arkadiusz Modzelewski
University of Padua, Italy; Polish-Japanese Academy of Information Technology, Poland; NASK National Research Institute, Poland
Pawel Golik
University of Padua, Italy
Anna Kolos
NASK National Research Institute, Poland
Giovanni Da San Martino
Associate Professor, Department of Mathematics, University of Padova, Italy
Machine Learning and Natural Language Processing