🤖 AI Summary
This study addresses the detrimental impact of clickbait headlines on information quality and user trust. The authors propose an interpretable detection method that integrates linguistically motivated features with deep textual embeddings. Specifically, they construct a feature set of 15 explicit linguistic cues—such as second-person pronouns, superlatives, numerals, and emphatic punctuation—and evaluate it in combination with classical vectorization techniques, word embeddings (Word2Vec, GloVe), and large language model (LLM) embeddings as input to a tree-based classifier (XGBoost). The best-performing model achieves an F1-score of 91%, significantly outperforming baseline approaches including TF-IDF, feature-only models, and prompt-based LLM classification. The code and trained models have been made publicly available.
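As a rough illustration of the kind of explicit cues described above, the sketch below counts a few of them with regular expressions. The exact definitions of the paper's 15 features are not given in this summary, so the patterns here (including the crude `\w+est` superlative heuristic) are hypothetical stand-ins:

```python
import re

# Hypothetical patterns for four of the 15 linguistic cues; the paper's
# actual feature definitions may differ.
SECOND_PERSON = re.compile(r"\b(you|your|yours|yourself)\b", re.IGNORECASE)
SUPERLATIVES = re.compile(r"\b(best|worst|most|least|\w+est)\b", re.IGNORECASE)
NUMERALS = re.compile(r"\d+")
EMPHATIC_PUNCT = re.compile(r"[!?]")

def headline_features(headline: str) -> list[int]:
    """Count occurrences of each cue in a headline."""
    return [
        len(SECOND_PERSON.findall(headline)),
        len(SUPERLATIVES.findall(headline)),
        len(NUMERALS.findall(headline)),
        len(EMPHATIC_PUNCT.findall(headline)),
    ]

print(headline_features("You Won't Believe the 7 Best Tricks!"))  # [1, 1, 1, 1]
```

Because each feature is a simple count, a tree-based classifier's splits on these columns remain directly interpretable (e.g. "more than one exclamation mark").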
📝 Abstract
Clickbait headlines degrade the quality of online information and undermine user trust. We present a hybrid approach to clickbait detection that combines transformer-based text embeddings with linguistically motivated informativeness features. Using natural language processing techniques, we evaluate classical vectorizers, word embedding baselines, and large language model embeddings paired with tree-based classifiers. Our best-performing model, XGBoost over embeddings augmented with 15 explicit features, achieves an F1-score of 91%, outperforming TF-IDF, Word2Vec, GloVe, prompt-based LLM classification, and feature-only baselines. The proposed feature set enhances interpretability by highlighting salient linguistic cues such as second-person pronouns, superlatives, numerals, and attention-oriented punctuation, enabling transparent and well-calibrated clickbait predictions. We release code and trained models to support reproducible research.
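The core modeling idea in the abstract—augmenting dense headline embeddings with the 15 explicit cue features before fitting a tree-based classifier—can be sketched as follows. All inputs here are synthetic, and scikit-learn's `GradientBoostingClassifier` is used as a stand-in for the XGBoost model the paper actually trains:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier  # stand-in for XGBoost

rng = np.random.default_rng(0)

# Synthetic placeholders: dense headline embeddings (e.g. from an LLM
# encoder) and the 15 explicit linguistic-cue counts per headline.
n_headlines, emb_dim, n_cues = 200, 32, 15
embeddings = rng.normal(size=(n_headlines, emb_dim))
cue_features = rng.integers(0, 4, size=(n_headlines, n_cues)).astype(float)
labels = rng.integers(0, 2, size=n_headlines)  # 1 = clickbait (synthetic)

# Hybrid representation: concatenate embeddings with explicit features,
# then fit a tree-based classifier on the augmented matrix.
X = np.hstack([embeddings, cue_features])
clf = GradientBoostingClassifier(random_state=0).fit(X, labels)

print(X.shape)  # (200, 47)
```

Keeping the cue counts as separate columns (rather than folding them into the embedding) is what lets feature-importance inspection of the trained trees surface cues like second-person pronouns or emphatic punctuation.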