Do small language models generate realistic variable-quality fake news headlines?

📅 2025-08-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the capability of small language models (SLMs) to generate multi-tiered, high-fidelity fake news headlines under explicit prompting and their evasion potential against existing detection methods. Using controlled prompt engineering, we systematically evaluate 24,000 fake headlines generated by 14 SLMs, employing DistilBERT-based and ensemble classifiers for both quality grading and authenticity classification. Results show that SLMs reliably follow instructions to produce both high- and low-quality fake headlines; however, their outputs exhibit statistically significant semantic and stylistic divergence from authentic news headlines. Crucially, state-of-the-art detectors achieve only 35.2%–63.5% accuracy in identifying these SLM-generated fakes, revealing critical robustness gaps. This work constitutes the first systematic empirical analysis demonstrating the controllability, quality tunability, and detection vulnerability of SLMs in disinformation generation—providing foundational evidence and methodological guidance for developing resilient content safety mechanisms.

📝 Abstract
Small language models (SLMs) are capable of text generation and could potentially be used to produce falsified texts online. This study evaluates 14 SLMs (1.7B–14B parameters) from the LLaMA, Gemma, Phi, SmolLM, Mistral, and Granite families on generating perceived low- and high-quality fake news headlines when explicitly prompted, and on whether those headlines resemble real-world news headlines. Using controlled prompt engineering, 24,000 headlines were generated across low-quality and high-quality deceptive categories. Existing machine learning and deep learning news headline quality detectors were then applied to these SLM-generated fake headlines. The SLMs showed high compliance rates with minimal ethical resistance, with occasional exceptions. Quality detection using established DistilBERT and bagging classifier models revealed frequent misclassification, with accuracies ranging from only 35.2% to 63.5%. These findings suggest that the tested SLMs readily generate falsified headlines, with slight variation in ethical restraint, and that the generated headlines do not closely resemble existing, primarily human-written web content, given the low quality-classification accuracy.
Problem

Research questions and friction points this paper is trying to address.

Assessing small language models' fake news headline generation capability
Evaluating quality detection accuracy of machine learning models
Examining ethical compliance and realism in generated headlines
Innovation

Methods, ideas, or system contributions that make the work stand out.

Controlled prompt engineering for headline generation
DistilBERT and bagging classifiers for quality detection
Evaluating 14 small language models' fake news capabilities
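The pipeline the bullets above describe can be sketched as follows. This is a minimal illustration, not the authors' code: the two-tier prompt wording, the stub SLM, and the stub detector are all hypothetical stand-ins (the paper's actual detectors are DistilBERT and bagging classifiers), but the shape of the evaluation — generate per tier, then score a quality detector's accuracy on the output — matches the described setup.

```python
import random

# Two quality tiers, each with its own controlled prompt (wording is illustrative).
PROMPTS = {
    "low_quality": "Write a clearly exaggerated, clickbait-style fake news headline.",
    "high_quality": "Write a plausible, professionally worded fake news headline.",
}

def generate_headlines(model_call, n_per_tier=5):
    """Collect n headlines per quality tier from one SLM via `model_call`."""
    dataset = []
    for tier, prompt in PROMPTS.items():
        for _ in range(n_per_tier):
            dataset.append({"tier": tier, "headline": model_call(prompt)})
    return dataset

def detection_accuracy(dataset, detector):
    """Fraction of generated headlines whose quality tier the detector recovers."""
    correct = sum(detector(item["headline"]) == item["tier"] for item in dataset)
    return correct / len(dataset)

# Stub SLM and detector so the sketch runs end to end; a real study would call
# each of the 14 SLMs here and a trained DistilBERT/bagging quality classifier.
random.seed(0)

def stub_slm(prompt):
    tier = "LQ" if "exaggerated" in prompt else "HQ"
    return f"[{tier}] headline #{random.randint(0, 999)}"

def stub_detector(headline):
    return "low_quality" if headline.startswith("[LQ]") else "high_quality"

data = generate_headlines(stub_slm, n_per_tier=5)
print(len(data), detection_accuracy(data, stub_detector))  # prints: 10 1.0
```

With a real detector, the accuracy figure is the quantity the paper reports in the 35.2%–63.5% range; the stub detector scores 1.0 only because it reads the tier tag the stub SLM embeds.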
Austin McCutcheon
Department of Computer Science, Lakehead University, Orillia, Canada
Chris Brogly
Department of Computer Science, Lakehead University, Orillia, Canada