🤖 AI Summary
Existing Chinese hate speech research lacks fine-grained span-level annotations and fails to identify malicious slang, limiting toxicity detection accuracy. Method: We introduce STATE ToxiCN, the first Chinese fine-grained span-level hate speech benchmark, featuring a Target-Argument-Hateful-Group quadruple annotation schema that establishes a target-aware toxicity extraction paradigm. Annotation is performed by human annotators under strict span-level guidelines; we further design an LLM-based zero-/few-shot evaluation framework to systematically assess mainstream large language models' ability to detect Chinese malicious slang. Contribution/Results: Experiments reveal significant deficiencies in current models' ability to identify target-associated toxicity, and LLMs exhibit low accuracy and poor generalization on Chinese slang detection. This work provides a new benchmark and methodological foundation for fine-grained Chinese hate speech analysis.
📝 Abstract
The proliferation of hate speech has caused significant harm to society. The intensity and directionality of hate are closely tied to the target and argument it is associated with. However, research on hate speech detection in Chinese has lagged behind, and existing datasets lack span-level fine-grained annotations. Furthermore, the lack of research on Chinese hateful slang poses a significant challenge. In this paper, we provide a solution for fine-grained detection of Chinese hate speech. First, we construct a dataset containing Target-Argument-Hateful-Group quadruples (STATE ToxiCN), which is the first span-level Chinese hate speech dataset. Second, we evaluate the span-level hate speech detection performance of existing models using STATE ToxiCN. Finally, we conduct the first study on Chinese hateful slang and evaluate the ability of LLMs to detect such expressions. Our work contributes valuable resources and insights to advance span-level hate speech detection in Chinese.
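The Target-Argument-Hateful-Group quadruple described above can be sketched as a simple record type. This is a minimal illustration only; the field names, types, and the toy example are assumptions, not the dataset's actual schema or label inventory.

```python
from dataclasses import dataclass

# Hypothetical sketch of one span-level quadruple, as described in the abstract.
# Field names and the example values below are illustrative assumptions.
@dataclass
class HateQuadruple:
    target: str    # span naming who or what the post is about
    argument: str  # span carrying the comment made about the target
    hateful: bool  # whether this target-argument pair expresses hate
    group: str     # label for the targeted group (or a non-hate marker)

def wrap_quadruples(spans: list[tuple[str, str, bool, str]]) -> list[HateQuadruple]:
    """Wrap annotated (target, argument, hateful, group) spans from one post."""
    return [HateQuadruple(t, a, h, g) for (t, a, h, g) in spans]

# Toy example: a single annotated span pair from a hypothetical post.
quads = wrap_quadruples([("group X", "should be banned", True, "Region")])
print(quads[0].hateful)  # True
```

Span-level extraction in this framing means a model must recover the target and argument substrings from the post, not just assign a post-level label.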