Revealing the impact of synthetic native samples and multi-tasking strategies in Hindi-English code-mixed humour and sarcasm detection

📅 2024-12-17
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
This paper addresses the challenge of humour and sarcasm detection in Hindi-English code-mixed text. To tackle it, the authors propose three methodological strategies: (1) augmenting code-mixed training data with monolingual native samples to enhance lexical and syntactic diversity; (2) a multi-task learning (MTL) framework that jointly models humour/sarcasm and hate speech detection on top of BERT-style masked language models (MLMs); and (3) few-shot prompting of very large multilingual language models (VMLMs). To the authors' knowledge, this is the first systematic study validating monolingual data injection and cross-task multi-task learning for code-mixed sarcasm identification. Experimental results show that multi-task learning yields the largest gains, improving humour F1 by up to 10.67% and sarcasm F1 by up to 12.35%, while few-shot prompting of VMLMs fails to outperform the other approaches. Code, data splits, and experimental configurations are publicly released for reproducibility.
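Strategy (1), native sample mixing, amounts to augmenting the code-mixed training set with monolingual (Hindi and English) samples of the same task. A minimal sketch with hypothetical records and an illustrative mixing ratio (the paper's actual splits and proportions are not reproduced here):

```python
import random

def mix_native_samples(code_mixed, native, ratio=0.5, seed=13):
    """Augment a code-mixed training set with native-language samples.

    `ratio` controls how many native samples are added relative to the
    code-mixed set size; the exact proportion used in the paper is an
    assumption here, not a reported setting.
    """
    rng = random.Random(seed)
    n_native = int(len(code_mixed) * ratio)
    mixed = code_mixed + rng.sample(native, min(n_native, len(native)))
    rng.shuffle(mixed)  # avoid a block of native samples at the end
    return mixed

# Hypothetical (text, label) pairs for humour detection.
cm_train = [("yaar ye joke toh full comedy hai", 1),
            ("aaj office mein meeting hai", 0)]
hi_native = [("यह चुटकुला बहुत मज़ेदार है", 1)]
en_native = [("that pun was hilarious", 1),
             ("the train leaves at noon", 0)]

augmented = mix_native_samples(cm_train, hi_native + en_native, ratio=1.0)
```

The augmented set then replaces the original code-mixed training set during fine-tuning; labels are shared across languages, so no extra annotation is needed.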

📝 Abstract
In this paper, we report our experiments with various strategies to improve code-mixed humour and sarcasm detection. All experiments target the Hindi-English code-mixed scenario, for which we have the necessary linguistic expertise. We experimented with three approaches: (i) native sample mixing, (ii) multi-task learning (MTL), and (iii) prompting very large multilingual language models (VMLMs). In native sample mixing, we added monolingual task samples to the code-mixed training sets. In MTL, we relied on native and code-mixed samples of a semantically related task (hate detection in our case). Finally, in the third approach, we evaluated the efficacy of VMLMs via few-shot in-context prompting. Our key findings are: (i) adding native samples improved humour (raising the F1-score by up to 6.76%) and sarcasm (raising the F1-score by up to 8.64%) detection; (ii) training MLMs in an MTL framework boosted performance for both humour (F1-score up by as much as 10.67%) and sarcasm (up to 12.35%) detection; and (iii) prompting VMLMs could not outperform the other approaches. Finally, our ablation studies and error analysis identified the cases where our models still fall short. We provide our code for reproducibility.
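The MTL setup in (ii) can be pictured as a shared encoder with one classification head per task (humour/sarcasm and hate detection), trained on a joint loss. A toy numpy sketch with made-up dimensions (the paper fine-tunes BERT-style MLMs; the equal loss weighting below is an assumption, not the authors' reported configuration):

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_labels = 8, 2  # toy sizes; a real setup uses a BERT encoder

# Shared encoder weights plus one linear head per task.
W_shared = rng.normal(size=(d_model, d_model))
heads = {"humour": rng.normal(size=(d_model, n_labels)),
         "hate": rng.normal(size=(d_model, n_labels))}

def forward(x, task):
    h = np.tanh(x @ W_shared)   # shared representation across tasks
    return h @ heads[task]      # task-specific logits

def cross_entropy(logits, label):
    z = logits - logits.max()   # numerically stable log-softmax
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

x = rng.normal(size=(d_model,))  # stand-in for an encoded sentence
# Joint objective: sum of per-task losses with equal weights (an assumption).
loss = cross_entropy(forward(x, "humour"), 1) + cross_entropy(forward(x, "hate"), 0)
```

The intuition is that gradients from the related hate-detection task regularise the shared representation, which is where the reported F1 gains come from.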
Problem

Research questions and friction points this paper is trying to address.

Detecting humor and sarcasm in Hindi-English code-mixed text
Improving model performance with native samples and multi-task learning
Evaluating large multilingual models via prompting and fine-tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Native sample mixing with monolingual data
Multi-task learning using hate detection
Prompting and instruction fine-tuning of VMLMs
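Few-shot prompting of VMLMs, approach (iii), boils down to packing a handful of labelled examples into the context ahead of the query sentence. A hedged sketch of such a prompt builder (the instruction wording and examples are illustrative; the paper's exact template is not reproduced here):

```python
def build_few_shot_prompt(examples, query, instruction):
    """Assemble a k-shot classification prompt for a multilingual LM."""
    lines = [instruction]
    for text, label in examples:
        lines.append(f"Sentence: {text}\nLabel: {'yes' if label else 'no'}")
    lines.append(f"Sentence: {query}\nLabel:")  # model completes the label
    return "\n\n".join(lines)

# Hypothetical two-shot prompt for humour detection.
shots = [("yaar ye joke toh full comedy hai", 1),
         ("kal exam hai, padhna hai", 0)]
prompt = build_few_shot_prompt(
    shots,
    "boss ne phir se pun maara",
    "Decide whether each Hindi-English sentence is humorous. Answer yes or no.",
)
```

The completion after the final "Label:" is then parsed into a class prediction; per the paper's findings, this route did not outperform fine-tuned MLMs.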
Debajyoti Mazumder
Indian Institute of Science Education and Research Bhopal
NLP
Aakash Kumar
Intelligent Systems' Lab (ISL), Department of Data Science and Engineering, Indian Institute of Science Education and Research, Bhopal, India
Jasabanta Patro
Assistant Professor, DSE, IISER Bhopal
NLP, Social Computing