Hope Speech Detection in Social Media English Corpora: Performance of Traditional and Transformer Models

📅 2025-10-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the automatic identification of hope speech—text expressing agency, goal-directedness, and inspiration—in English social media corpora. We systematically compare traditional machine learning models (linear and RBF-kernel SVMs, Naïve Bayes, logistic regression) against fine-tuned Transformer-based models (including BERT variants). Experiments under low-resource conditions (limited annotated data) demonstrate that Transformers significantly outperform classical approaches, achieving a weighted F1-score of 0.79 and accuracy of 0.80. This superiority stems from their enhanced capacity to capture subtle semantic cues—such as volitional verbs, future tense markers, and positively framed pathways toward goals. To our knowledge, this work provides the first empirical validation of large language models’ effectiveness and generalizability in low-resource hope speech detection. It establishes a novel paradigm for affective computing and positive psychological language modeling, bridging computational linguistics with well-being–oriented NLP applications.

📝 Abstract
The identification of hope speech has become a prominent NLP task, given the need to detect motivational expressions of agency and goal-directed behaviour on social media platforms. This work evaluates traditional machine learning models and fine-tuned transformers on a hope speech dataset previously split into train, development, and test sets. On the development set, a linear-kernel SVM and logistic regression both reached a macro-F1 of 0.78; an SVM with RBF kernel reached 0.77, and Naïve Bayes reached 0.75. Transformer models delivered better results: the best model achieved weighted precision of 0.82, weighted recall of 0.80, weighted F1 of 0.79, macro F1 of 0.79, and accuracy of 0.80. These results suggest that while well-configured traditional machine learning models remain competitive, transformer architectures capture subtler semantics of hope, achieving higher precision and recall in hope speech detection; this in turn suggests that larger transformers and LLMs could perform even better on small datasets.
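The abstract reports both macro- and weighted-averaged F1, which differ under class imbalance. A minimal sketch of how the two averages are computed, using hypothetical per-class precision, recall, and support values (not the paper's actual data):

```python
# Sketch of macro- vs weighted-F1 averaging for a binary hope-speech task.
# All per-class numbers below are hypothetical illustrations,
# not results from the paper.

def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# (precision, recall, support) per class
classes = {
    "hope":     (0.70, 0.60, 200),   # minority class
    "non-hope": (0.85, 0.90, 800),   # majority class
}

per_class_f1 = {c: f1(p, r) for c, (p, r, _) in classes.items()}

# Macro-F1: unweighted mean -- every class counts equally.
macro_f1 = sum(per_class_f1.values()) / len(per_class_f1)

# Weighted-F1: mean weighted by class support -- the majority class dominates.
total = sum(s for _, _, s in classes.values())
weighted_f1 = sum(per_class_f1[c] * s / total
                  for c, (_, _, s) in classes.items())

print(f"macro-F1:    {macro_f1:.3f}")     # 0.760
print(f"weighted-F1: {weighted_f1:.3f}")  # 0.829
```

Because the majority class usually scores higher, weighted-F1 tends to exceed macro-F1 on imbalanced data, which is why hope speech papers typically report both.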
Problem

Research questions and friction points this paper is trying to address.

Detecting motivational hope speech in social media English corpora
Evaluating traditional machine learning versus transformer models
Identifying subtle semantic patterns in hope speech expressions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluated traditional machine learning models for hope speech
Fine-tuned transformer models for improved detection performance
Used linear-kernel SVM and logistic regression as baselines