🤖 AI Summary
Existing content moderation filters largely overlook cultural context, and they are especially insensitive to cultural norms in Arabic-language settings. This paper introduces FanarGuard, the first bilingual (Arabic–English), culturally aligned content filter. Methodologically, the authors construct a high-quality bilingual safety dataset of 468K samples by combining synthetic and publicly available data, and assess its quality along multiple dimensions using both LLM-based adjudication and human annotation; they also establish ArabCultBench, the first benchmark for evaluating cultural sensitivity in Arabic contexts. Contributions include: (1) the first systematic integration of cultural awareness into Arabic-language content filtering; and (2) strong cultural-alignment performance, with filter–human agreement exceeding inter-annotator agreement (κ = 0.82 vs. 0.76), while matching state-of-the-art filters on mainstream safety benchmarks and thereby improving cross-cultural moderation accuracy.
📝 Abstract
Content moderation filters are a critical safeguard against alignment failures in language models. Yet most existing filters focus narrowly on general safety and overlook cultural context. In this work, we introduce FanarGuard, a bilingual moderation filter that evaluates both safety and cultural alignment in Arabic and English. We construct a dataset of over 468K prompt–response pairs, drawn from synthetic and public datasets and scored by a panel of LLM judges on harmlessness and cultural awareness, and use it to train two filter variants. To rigorously evaluate cultural alignment, we further develop the first benchmark targeting Arabic cultural contexts, comprising over 1K norm-sensitive prompts with LLM-generated responses annotated by human raters. Results show that FanarGuard achieves stronger agreement with human annotations than inter-annotator reliability, while matching the performance of state-of-the-art filters on safety benchmarks. These findings highlight the importance of integrating cultural awareness into moderation and establish FanarGuard as a practical step toward more context-sensitive safeguards.
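The headline result compares filter–human agreement against inter-annotator agreement using Cohen's κ, which corrects raw agreement for the agreement expected by chance. As a minimal sketch of how such a comparison is computed (the label values and rating lists below are hypothetical, not from the paper's data):

```python
from collections import Counter

def cohen_kappa(a, b):
    """Cohen's kappa: chance-corrected agreement between two raters
    over the same items. Returns 1.0 for perfect agreement, 0.0 for
    agreement at chance level."""
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    # Observed agreement: fraction of items both raters labeled identically.
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Expected agreement: chance overlap from each rater's label frequencies.
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[k] * cb[k] for k in set(a) | set(b)) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical safety verdicts from a moderation filter and a human rater.
filter_labels = ["safe", "unsafe", "safe", "safe", "unsafe", "safe", "unsafe", "safe"]
human_labels  = ["safe", "unsafe", "safe", "unsafe", "unsafe", "safe", "unsafe", "safe"]

kappa = cohen_kappa(filter_labels, human_labels)  # → 0.75 for this toy data
```

In the paper's setting, the same statistic is computed once between the filter's labels and human annotations and once between pairs of human annotators; a filter κ above the inter-annotator κ indicates the filter is at least as consistent with any one human as humans are with each other.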