🤖 AI Summary
This study addresses the challenge of detecting obfuscated cyber-abusive language in Swahili, a low-resource language, where such content poses significant risks to children's online safety and is difficult for existing systems to identify. To tackle this issue, the work proposes the first automated detection approach tailored for Swahili, leveraging traditional machine learning classifiers including support vector machines, logistic regression, and decision trees. The methodology incorporates SMOTE-based oversampling and hyperparameter optimization to mitigate data scarcity and class imbalance. Experimental results demonstrate strong performance across precision, recall, and F1-score, confirming the viability of conventional machine learning techniques for identifying obfuscated abuse in settings with limited annotated data. This research thus offers a novel pathway and technical foundation for advancing child online safety in low-resource linguistic contexts.
📝 Abstract
The rise of digital technology has dramatically increased the potential for cyberbullying and online abuse, necessitating enhanced measures for detection and prevention, especially among children. This study focuses on detecting abusive obfuscated language in Swahili, a low-resource language that poses unique challenges due to its limited linguistic resources and technological support. Swahili was chosen because it is the most widely spoken language in Africa, with over 16 million native speakers and upwards of 100 million speakers in total, spanning East Africa and parts of the Middle East. We employed machine learning models including Support Vector Machines (SVM), Logistic Regression, and Decision Trees, optimized through rigorous parameter tuning and techniques such as the Synthetic Minority Over-sampling Technique (SMOTE) to handle data imbalance. Our analysis revealed that, while these models perform well on high-dimensional textual data, our dataset's small size and imbalance limit our findings' generalizability. Precision, recall, and F1 scores were thoroughly analyzed, highlighting the nuanced performance of each model in detecting obfuscated language. This research contributes to the broader discourse on ensuring safer online environments for children, advocating for expanded datasets and advanced machine learning techniques to improve the effectiveness of cyberbullying detection systems. Future work will focus on enhancing data robustness, exploring transfer learning, and integrating multimodal data to create more comprehensive and culturally sensitive detection mechanisms.
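The "rigorous parameter tuning" and per-model precision/recall/F1 analysis the abstract describes can be sketched as cross-validated grid search scored on F1. The parameter grids below are illustrative guesses, not the values reported in the paper.

```python
# Hedged sketch of hyperparameter tuning for the three classifiers the
# abstract names. Grids and cv settings are illustrative assumptions.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier

def tune(model, grid, X, y, cv=3):
    """Return (best_estimator, best_f1) from cross-validated grid search."""
    search = GridSearchCV(model, grid, scoring="f1", cv=cv)
    search.fit(X, y)
    return search.best_estimator_, search.best_score_

# Example grids (hypothetical ranges, for illustration only):
CANDIDATES = [
    (LinearSVC(), {"C": [0.1, 1.0, 10.0]}),
    (LogisticRegression(max_iter=1000), {"C": [0.1, 1.0, 10.0]}),
    (DecisionTreeClassifier(), {"max_depth": [3, 5, None]}),
]
```

On a small, imbalanced dataset, scoring the search on F1 rather than accuracy keeps the tuner from favoring models that simply predict the majority (non-abusive) class.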