Evaluation of Hate Speech Detection Using Large Language Models and Geographical Contextualization

📅 2025-02-26
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of hate speech detection in multilingual, multi-regional social media by systematically evaluating large language models (LLMs) on this task. We propose the first geography-aware, three-dimensional evaluation framework, assessing binary classification accuracy, geographic context sensitivity, and adversarial robustness—revealing critical trade-offs among these dimensions in current LLMs. Empirical evaluation is conducted on 1,000 samples across five regions using Llama2-13b, CodeLlama-7b, and DeepSeekCoder-6.7b. Results show CodeLlama achieves the highest recall (70.6%) for binary classification; DeepSeekCoder attains the best geographic localization accuracy (63/265 correctly localized); and Llama2 exhibits the highest misclassification rate (62.5%) on adversarial examples. Our study establishes a reproducible, cross-regional, multilingual benchmark for hate speech detection and delivers key insights into model capabilities and limitations in geographically nuanced, adversarial settings.

Technology Category

Application Category

📝 Abstract
The proliferation of hate speech on social media is one of the serious issues that is bringing huge impacts to society: an escalation of violence, discrimination, and social fragmentation. The problem of detecting hate speech is intrinsically multifaceted due to cultural, linguistic, and contextual complexities and adversarial manipulations. In this study, we systematically investigate the performance of LLMs on detecting hate speech across multilingual datasets and diverse geographic contexts. Our work presents a new evaluation framework in three dimensions: binary classification of hate speech, geography-aware contextual detection, and robustness to adversarially generated text. Using a dataset of 1,000 comments from five diverse regions, we evaluate three state-of-the-art LLMs: Llama2 (13b), Codellama (7b), and DeepSeekCoder (6.7b). Codellama had the best binary classification recall with 70.6% and an F1-score of 52.18%, whereas DeepSeekCoder had the best performance in geographic sensitivity, correctly detecting 63 out of 265 locations. The tests for adversarial robustness also showed significant weaknesses; Llama2 misclassified 62.5% of manipulated samples. These results bring to light the trade-offs between accuracy, contextual understanding, and robustness in the current versions of LLMs. This work has thus set the stage for developing contextually aware, multilingual hate speech detection systems by underlining key strengths and limitations, therefore offering actionable insights for future research and real-world applications.
Problem

Research questions and friction points this paper is trying to address.

Evaluate LLMs for hate speech detection
Assess geographical contextualization in detection
Test robustness against adversarial manipulations
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs for hate speech detection
Geography-aware contextual evaluation
Adversarial robustness assessment
🔎 Similar Papers
No similar papers found.