LLM-Based Multi-Task Bangla Hate Speech Detection: Type, Severity, and Target

📅 2025-10-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Hate speech detection for low-resource languages like Bengali has long been restricted to binary classification, lacking multidimensional, fine-grained modeling across hate type, severity level, and target group. Method: This work introduces BanglaMultiHate—the first multi-task and one of the largest manually annotated hate speech datasets for Bengali—supporting three concurrent tasks: hate type identification, severity grading, and target classification. We systematically evaluate classical machine learning, monolingual pretrained models (BanglaBERT), and large language models (LLMs) under zero-shot prompting and LoRA-based parameter-efficient fine-tuning. Results: LoRA-finetuned LLMs achieve performance comparable to BanglaBERT, yet culturally grounded monolingual pretraining proves decisive for robustness. This work establishes a new benchmark and reusable technical pipeline for content moderation in low-resource languages.
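The zero-shot setup described above can be sketched as a single prompt that asks an LLM to label all three dimensions at once. A minimal illustration follows; the label sets and wording are hypothetical, not the paper's actual annotation scheme or prompt.

```python
# Hedged sketch of a multi-task zero-shot prompt for Bangla hate speech.
# The category labels below are illustrative placeholders, not the
# paper's actual taxonomy.
def build_prompt(post: str) -> str:
    """Build one prompt covering hate type, severity, and target."""
    return (
        "Classify the following Bangla social media post along three dimensions.\n"
        "1. Hate type: religious, political, personal, or none\n"
        "2. Severity: mild, moderate, or severe\n"
        "3. Target: individual, organization, or community\n"
        f"Post: {post}\n"
        "Answer with one label per dimension:"
    )

prompt = build_prompt("<Bangla post text here>")
print(prompt)
```

In practice a parser would map the model's free-text answer back onto the three label sets; the paper contrasts this zero-shot route with LoRA fine-tuning.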

📝 Abstract
Online social media platforms are central to everyday communication and information seeking. While these platforms serve positive purposes, they also provide fertile ground for the spread of hate speech, offensive language, and bullying content targeting individuals, organizations, and communities. Such content undermines safety, participation, and equity online. Reliable detection systems are therefore needed, especially for low-resource languages where moderation tools are limited. In Bangla, prior work has contributed resources and models, but most are single-task (e.g., binary hate/offense detection) with limited coverage of multi-facet signals (type, severity, target). We address these gaps by introducing the first multi-task Bangla hate-speech dataset, BanglaMultiHate, one of the largest manually annotated corpora to date. Building on this resource, we conduct a comprehensive, controlled comparison spanning classical baselines, monolingual pretrained models, and LLMs under zero-shot prompting and LoRA fine-tuning. Our experiments assess LLM adaptability in a low-resource setting and reveal a consistent trend: although LoRA-tuned LLMs are competitive with BanglaBERT, culturally and linguistically grounded pretraining remains critical for robust performance. Together, our dataset and findings establish a stronger benchmark for developing culturally aligned moderation tools in low-resource contexts. For reproducibility, we will release the dataset and all related scripts.
Problem

Research questions and friction points this paper is trying to address.

Detecting multi-faceted hate speech (type, severity, target) in Bangla
Addressing limited moderation tools for low-resource languages
Evaluating LLM adaptability in low-resource hate speech detection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-task dataset for Bangla hate speech
LoRA fine-tuning of LLMs for classification
Comparison with monolingual pretrained models
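The LoRA fine-tuning idea behind the second bullet can be illustrated without any model weights: instead of updating a full pretrained matrix W, LoRA trains only a low-rank pair B·A added to it. The NumPy sketch below is a from-scratch toy, not the paper's pipeline (which would use a PEFT-style library on an actual LLM); dimensions and rank are illustrative.

```python
import numpy as np

class LoRALinear:
    """Toy LoRA layer: y = W x + (alpha/r) * B (A x).

    W is the frozen pretrained weight; only A and B are trainable.
    B is zero-initialized so the adapter starts as a no-op.
    """

    def __init__(self, d_in: int, d_out: int, rank: int = 8, alpha: int = 16, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(d_out, d_in))            # frozen
        self.A = rng.normal(scale=0.01, size=(rank, d_in))  # trainable
        self.B = np.zeros((d_out, rank))                    # trainable, zero init
        self.scale = alpha / rank

    def forward(self, x: np.ndarray) -> np.ndarray:
        return self.W @ x + self.scale * (self.B @ (self.A @ x))

    def trainable_params(self) -> int:
        return self.A.size + self.B.size

# Parameter savings at a BERT-like hidden size of 768 with rank 8:
layer = LoRALinear(d_in=768, d_out=768, rank=8)
full = layer.W.size               # 768 * 768 = 589824
lora = layer.trainable_params()   # 2 * 8 * 768 = 12288 (~2% of full)
print(full, lora)
```

This parameter reduction is why LoRA makes LLM fine-tuning feasible in low-resource settings like the one the paper studies: only the small A and B matrices are updated and stored per task.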