🤖 AI Summary
This work addresses the challenge of fine-grained classification of extremist content on social media, specifically distinguishing hate speech, incitement, and discrimination. We propose a socio-cultural context modeling method that leverages large language models (LLMs) to improve discrimination across diverse extremist categories. For the first time, we systematically benchmark Llama-2/3 against GPT-3.5/4 under zero-shot and supervised fine-tuning settings. Our results show that domain adaptation, not model scale, is the decisive factor: after fine-tuning, the open-source Llama-3 reaches an F1 score of 0.72 on the Indian subset of the Maronikolakis dataset, approaching GPT-4's 0.74 and shrinking the zero-shot performance gap from 21% to under 3%. These findings indicate that context-aware fine-tuning enables open-source LLMs to reach production-grade performance, offering a reproducible and scalable path to extremist content moderation in low-resource settings.
📝 Abstract
In recent years, widespread internet adoption and the growing user bases of social media platforms have led to a proliferation of extreme speech online. While traditional language models have demonstrated proficiency in distinguishing neutral from non-neutral text (i.e., extreme speech), categorizing the diverse types of extreme speech presents significant challenges. The task of extreme speech classification is particularly nuanced, as it requires a deep understanding of socio-cultural context to accurately interpret the speaker's intent. Even human annotators often disagree on the appropriate classification of such content, underscoring the complex and subjective nature of this task. Relying on human moderators also poses a scaling problem, necessitating automated systems for extreme speech classification. The recent launch of ChatGPT has drawn global attention to the potential applications of Large Language Models (LLMs) across a wide variety of tasks. Trained on vast and diverse corpora, and able to effectively capture and encode contextual information, LLMs emerge as highly promising tools for this specific task of extreme speech classification. In this paper, we leverage the Indian subset of the extreme speech dataset from Maronikolakis et al. (2022) to develop an effective classification framework using LLMs. We evaluate open-source Llama models against closed-source OpenAI models, finding that while pre-trained LLMs show moderate efficacy, fine-tuning with domain-specific data significantly enhances performance, highlighting their adaptability to linguistic and contextual nuances. Although GPT-based models outperform Llama models in zero-shot settings, the performance gap disappears after fine-tuning.
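To make the zero-shot setting concrete, the sketch below shows one plausible way to frame extreme speech classification as a prompt-and-parse loop around an LLM. The label names, prompt wording, and helper functions are illustrative assumptions, not the paper's exact protocol; the actual model call is left abstract.

```python
# Minimal sketch of zero-shot extreme speech classification via prompting.
# LABELS, the prompt wording, and the parsing rule are illustrative
# assumptions; the paper's exact prompt and label set may differ.

LABELS = ["hate speech", "incitement", "discrimination"]

def build_zero_shot_prompt(post: str, labels=LABELS) -> str:
    """Build a prompt asking the model to pick exactly one category."""
    label_list = ", ".join(labels)
    return (
        "You are a content moderation assistant familiar with the "
        "socio-cultural context of Indian social media.\n"
        f"Classify the following post into exactly one category: {label_list}.\n"
        f"Post: {post}\n"
        "Answer with the category name only."
    )

def parse_label(completion: str, labels=LABELS) -> str:
    """Map a raw model completion back onto a known label."""
    lowered = completion.strip().lower()
    for label in labels:
        if label in lowered:
            return label
    return "unknown"  # fall back when the model answers off-script
```

In a fine-tuned setting the same prompt/parse pair can be reused; only the underlying model weights change, which is what lets the evaluation compare zero-shot and fine-tuned variants on equal footing.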