🤖 AI Summary
This work addresses the underexplored, low-resource content safety challenge of multi-label hate speech detection in transliterated Bangla—a phonetic representation of Bangla written in the Latin alphabet. Method: We introduce (1) a Transformer-based incremental (further) pretraining strategy tailored to transliterated text; (2) a translation-augmented zero-shot prompting framework for large language models (LLMs); and (3) a joint treatment of multi-label classification and code-mixed textual features. Contribution/Results: We release BanTH, the first fine-grained multi-label dataset for this task—comprising 37.3k YouTube comments annotated across target groups including gender, religion, and geography. Empirical evaluation shows that our incrementally pretrained model achieves supervised state-of-the-art performance on BanTH. Moreover, the translation-augmented prompting method significantly outperforms direct prompting and cross-lingual transfer baselines in the zero-shot setting, establishing a new paradigm for content moderation in low-resource transliterated languages.
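The translation-augmented prompting idea can be sketched roughly as a two-step pipeline: first map the transliterated comment into English, then pose a zero-shot classification question to an LLM over the translation. A minimal illustration, assuming a placeholder `translate` function standing in for any transliterated-Bangla-to-English MT system; the prompt wording and label names are illustrative, not the paper's exact template:

```python
# Illustrative label set; the actual BanTH taxonomy is defined in the paper.
LABELS = ["gender", "religion", "geography"]

def translate(translit_bn: str) -> str:
    # Placeholder: a real pipeline would call an MT model or API here.
    return "<English translation of: " + translit_bn + ">"

def build_prompt(comment: str) -> str:
    """Translate first, then classify: the LLM sees English text,
    sidestepping its weaker coverage of transliterated Bangla."""
    english = translate(comment)
    return (
        "You are a content-moderation assistant.\n"
        f"Comment (translated to English): {english}\n"
        f"Which of these target groups does it attack? {', '.join(LABELS)}\n"
        "Answer with a comma-separated list, or 'none'."
    )

prompt = build_prompt("tumi khub kharap")  # a transliterated Bangla sample
```

The design point is that translation shifts the task into the LLM's high-resource regime, which is why the paper finds it outperforms prompting on the raw transliterated text.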
📝 Abstract
The proliferation of transliterated text in digital spaces has emphasized the need for detecting and classifying hate speech in languages beyond English, particularly in low-resource languages. As online discourse can perpetuate discrimination against target groups, e.g., by gender, religion, or origin, multi-label classification of hateful content can help in understanding the motivation behind hate and enhance content moderation. While previous efforts have focused on monolingual or binary hate classification tasks, no work has yet addressed the challenge of multi-label hate speech classification in transliterated Bangla. We introduce BanTH, the first multi-label transliterated Bangla hate speech dataset, comprising 37.3k samples. The samples are sourced from YouTube comments, where each instance is labeled with one or more target groups, reflecting the regional demographics. We establish novel Transformer encoder-based baselines by further pre-training on a transliterated Bangla corpus. We also propose a novel translation-based LLM prompting strategy for transliterated text. Experiments reveal that our further pre-trained encoders achieve state-of-the-art performance on the BanTH dataset, while our translation-based prompting outperforms other strategies in the zero-shot setting. The introduction of BanTH not only fills a critical gap in hate speech research for Bangla but also sets the stage for future exploration of code-mixed and multi-label classification challenges in underrepresented languages.
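What distinguishes the multi-label setup from ordinary multi-class classification is that each target group is decided independently, so a single comment can carry several labels at once. A minimal sketch of the decision rule, with illustrative label names and logits (the paper's actual taxonomy and model outputs differ):

```python
import math

# Illustrative labels; BanTH's target groups include gender, religion,
# and geography, but the exact taxonomy is defined in the paper.
LABELS = ["gender", "religion", "geography", "none"]

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def predict_labels(logits: list[float], threshold: float = 0.5) -> list[str]:
    """Multi-label decision: each label fires independently via a sigmoid,
    unlike softmax multi-class where exactly one label is chosen."""
    return [lab for lab, z in zip(LABELS, logits) if sigmoid(z) >= threshold]

# A comment attacking both a religious group and a region can carry
# two labels simultaneously:
print(predict_labels([-2.1, 1.3, 0.8, -3.0]))  # ['religion', 'geography']
```

This per-label independence is what lets a dataset like BanTH record overlapping target groups for a single comment rather than forcing one category.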