SLAyiNG: Towards Queer Language Processing

📅 2025-09-22
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Queer slang is frequently misclassified as hate speech by NLP systems, a problem compounded by the absence of high-quality annotated benchmark datasets. To address this, the authors introduce SLAyiNG, the first annotated dataset of queer slang, curated from real-world sources including subtitles, social media posts, and podcasts. Annotation combines human expertise with OpenAI's o3-mini for term extraction, usage identification, and sense disambiguation. Preliminary inter-annotator agreement reaches an average Krippendorff's alpha of 0.746, suggesting that state-of-the-art reasoning models can serve as pre-filtering tools, while the complex and sensitive nature of queer language still requires expert- and community-driven annotation. SLAyiNG fills a critical gap in benchmarks for queer language processing.

📝 Abstract
Knowledge of slang is a desirable feature of LLMs in the context of user interaction, as slang often reflects an individual's social identity. Several works on informal language processing have defined and curated benchmarks for tasks such as detection and identification of slang. In this paper, we focus on queer slang. Queer slang can be mistakenly flagged as hate speech or can evoke negative responses from LLMs during user interaction. Research efforts so far have not focused explicitly on queer slang. In particular, detection and processing of queer slang have not been thoroughly evaluated due to the lack of a high-quality annotated benchmark. To address this gap, we curate SLAyiNG, the first dataset containing annotated queer slang derived from subtitles, social media posts, and podcasts, reflecting real-world usage. We describe our data curation process, including the collection of slang terms and definitions, scraping sources for examples that reflect usage of these terms, and our ongoing annotation process. As preliminary results, we calculate inter-annotator agreement for human annotators and OpenAI's model o3-mini, evaluating performance on the task of sense disambiguation. Reaching an average Krippendorff's alpha of 0.746, we argue that state-of-the-art reasoning models can serve as tools for pre-filtering, but the complex and often sensitive nature of queer language data requires expert and community-driven annotation efforts.
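The abstract reports an average Krippendorff's alpha of 0.746 for agreement between human annotators and o3-mini on sense disambiguation. For readers unfamiliar with the statistic, here is a minimal sketch of nominal Krippendorff's alpha over categorical sense labels (the function name and data layout are illustrative, not from the paper; the paper's exact metric variant is not specified in this summary):

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """Nominal Krippendorff's alpha.

    `units` is a list of annotation units; each unit is a list of
    labels assigned by the annotators (None marks a missing rating).
    Returns None when fewer than two pairable values exist.
    """
    o = Counter()    # coincidence matrix o[(c, k)]
    n_c = Counter()  # marginal count per label
    n = 0            # total number of pairable values
    for unit in units:
        vals = [v for v in unit if v is not None]
        m = len(vals)
        if m < 2:
            continue  # units rated by fewer than two annotators are unpairable
        n += m
        for v in vals:
            n_c[v] += 1
        # Each ordered pair within a unit contributes 1/(m-1) coincidences.
        for c, k in permutations(vals, 2):
            o[(c, k)] += 1 / (m - 1)
    if n < 2:
        return None
    # Observed disagreement: off-diagonal coincidences.
    d_o = sum(v for (c, k), v in o.items() if c != k)
    # Expected disagreement under chance, from the marginals.
    d_e = sum(n_c[c] * n_c[k] for c in n_c for k in n_c if c != k) / (n - 1)
    if d_e == 0:
        return 1.0  # only one label ever used: no disagreement possible
    return 1 - d_o / d_e
```

Alpha is 1.0 under perfect agreement (e.g. `[['a', 'a'], ['b', 'b']]`) and 0 when observed disagreement matches what chance would predict; the paper's reported 0.746 sits in the range commonly read as tentative-to-acceptable reliability for content analysis.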
Problem

Research questions and friction points this paper is trying to address.

Addressing the lack of annotated benchmarks for queer slang processing
Preventing queer slang from being misclassified as hate speech
Improving LLM responses to queer slang during user interactions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Curated the first annotated queer slang dataset
Collected slang examples from subtitles, social media posts, and podcasts
Used state-of-the-art reasoning models for pre-filtering during annotation