🤖 AI Summary
Toxic speech detection on social media remains challenging when labeled examples are scarce, owing to domain shift, class imbalance, and poor generalizability.
Method: We propose an uncertainty-guided toxicity detection firewall that integrates Bayesian neural networks with active learning for pseudo-label selection. By quantifying predictive uncertainty, the framework iteratively identifies high-confidence pseudo-labeled samples for self-training, enabling robust fine-tuning across diverse pre-trained language models.
Contribution/Results: In the 5-shot setting, our method achieves a 14.92% F1-score improvement over the base model. It also enhances cross-domain robustness, resilience to severe class imbalance, and transferability across models. Unlike prior low-resource approaches, our solution is scalable, offers control over confidence calibration, and provides a trustworthy automated framework for content moderation under extreme labeling constraints.
📝 Abstract
With the widespread use of social media, user-generated content has surged on online platforms. When such content includes hateful, abusive, offensive, or cyberbullying behavior, it is classified as toxic speech, posing a significant threat to the integrity and safety of the online ecosystem. While manual content moderation is still prevalent, the overwhelming volume of content and the psychological strain on human moderators underscore the need for automated toxic speech detection. Previously proposed detection methods often rely on large annotated datasets; however, acquiring such datasets is costly and challenging in practice. To address this issue, we propose U-GIFT, an uncertainty-guided firewall for few-shot toxic speech detection that uses self-training to enhance detection performance even when labeled data is limited. Specifically, U-GIFT combines active learning with Bayesian Neural Networks (BNNs) to automatically identify high-quality samples from unlabeled data, prioritizing pseudo-labels with higher confidence for training based on uncertainty estimates derived from model predictions. Extensive experiments demonstrate that U-GIFT significantly outperforms competitive baselines in few-shot detection scenarios. In the 5-shot setting, it achieves a 14.92% performance improvement over the base model. Importantly, U-GIFT is user-friendly and adaptable to various pre-trained language models (PLMs). It also performs robustly under sample imbalance and in cross-domain settings, and generalizes well across various language applications. We believe that U-GIFT provides an efficient solution for few-shot toxic speech detection, offering substantial support for automated content moderation in cyberspace and thereby acting as a firewall that promotes advancements in cybersecurity.
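The core selection step, scoring unlabeled samples by predictive uncertainty and keeping only the most confident pseudo-labels for self-training, can be sketched as follows. This is a minimal illustration, not the paper's implementation: using Monte Carlo dropout as the BNN approximation, predictive entropy as the uncertainty score, and all function names here are assumptions for the sketch.

```python
import numpy as np

def mc_dropout_uncertainty(predict_fn, texts, n_passes=20):
    """Approximate BNN predictive uncertainty with Monte Carlo dropout:
    average several stochastic forward passes, then score each sample
    by the entropy of the averaged class probabilities."""
    probs = np.stack([predict_fn(texts) for _ in range(n_passes)])  # (T, N, C)
    mean_probs = probs.mean(axis=0)
    entropy = -(mean_probs * np.log(mean_probs + 1e-12)).sum(axis=1)
    return mean_probs, entropy

def select_pseudo_labels(mean_probs, entropy, budget):
    """Keep the `budget` lowest-uncertainty samples; their argmax class
    becomes the pseudo-label used in the next self-training round."""
    keep = np.argsort(entropy)[:budget]
    return keep, mean_probs[keep].argmax(axis=1)

# Toy stand-in for a dropout-enabled classifier: sample 0 is predicted
# confidently (toxic), sample 1 is ambiguous.
rng = np.random.default_rng(0)
def toy_predict(texts):
    base = np.array([[0.05, 0.95], [0.55, 0.45]])
    p = np.clip(base + rng.normal(0.0, 0.02, base.shape), 1e-3, None)
    return p / p.sum(axis=1, keepdims=True)

mean_p, unc = mc_dropout_uncertainty(toy_predict, ["sample A", "sample B"])
keep, labels = select_pseudo_labels(mean_p, unc, budget=1)
print(keep, labels)  # only the low-entropy (confident) sample is kept
```

In a full self-training loop, the selected pseudo-labeled samples would be merged with the few-shot labeled set and the PLM fine-tuned again, repeating until the unlabeled pool or a stopping criterion is exhausted.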