FanarGuard: A Culturally-Aware Moderation Filter for Arabic Language Models

📅 2025-11-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing content moderation filters largely overlook cultural context, and Arabic-language settings in particular suffer from insufficient cultural sensitivity. This paper introduces FanarGuard, the first bilingual (Arabic–English) culturally aligned content filter. Methodologically, the authors construct a high-quality bilingual safety dataset of 468K samples from synthetic and publicly available data, score it with a panel of LLM judges, and validate quality through human annotation; they also establish ArabCultBench, the first benchmark for assessing cultural sensitivity in Arabic contexts. Contributions include: (1) the first systematic integration of cultural awareness into Arabic-language content filtering; and (2) stronger agreement with human annotations than inter-annotator reliability (κ = 0.82 vs. 0.76) on cultural alignment evaluation, while matching state-of-the-art filters on mainstream safety benchmarks.

📝 Abstract
Content moderation filters are a critical safeguard against alignment failures in language models. Yet most existing filters focus narrowly on general safety and overlook cultural context. In this work, we introduce FanarGuard, a bilingual moderation filter that evaluates both safety and cultural alignment in Arabic and English. We construct a dataset of over 468K prompt and response pairs, drawn from synthetic and public datasets, scored by a panel of LLM judges on harmlessness and cultural awareness, and use it to train two filter variants. To rigorously evaluate cultural alignment, we further develop the first benchmark targeting Arabic cultural contexts, comprising over 1k norm-sensitive prompts with LLM-generated responses annotated by human raters. Results show that FanarGuard achieves stronger agreement with human annotations than inter-annotator reliability, while matching the performance of state-of-the-art filters on safety benchmarks. These findings highlight the importance of integrating cultural awareness into moderation and establish FanarGuard as a practical step toward more context-sensitive safeguards.
Problem

Research questions and friction points this paper is trying to address.

Developing culturally-aware moderation filters for Arabic language models
Addressing cultural context gaps in existing content moderation systems
Evaluating both safety and cultural alignment in bilingual content filtering
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bilingual moderation filter for Arabic and English
Trained using 468K culturally annotated prompt-response pairs
First benchmark for Arabic cultural alignment evaluation
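The headline result compares FanarGuard's agreement with human raters against inter-annotator reliability, both measured with Cohen's κ. As a minimal sketch of how that metric works (this is not the paper's code; the label values and helper name are illustrative), κ for two label sequences can be computed as:

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two equal-length label sequences.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement and p_e is the agreement expected by chance from
    each rater's marginal label distribution.
    """
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    # Observed agreement: fraction of items labeled identically.
    p_o = sum(x == y for x, y in zip(a, b)) / n
    # Chance agreement from the two marginal label frequencies.
    ca, cb = Counter(a), Counter(b)
    p_e = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical verdicts from a moderation filter and a human rater.
filter_labels = ["safe", "safe", "unsafe", "unsafe", "safe"]
human_labels  = ["safe", "unsafe", "unsafe", "unsafe", "safe"]
print(round(cohens_kappa(filter_labels, human_labels), 3))  # → 0.615
```

A κ of 1 means perfect agreement and 0 means chance-level agreement, so the reported 0.82 (filter vs. humans) exceeding 0.76 (human vs. human) says the filter tracks the human consensus at least as closely as the annotators track each other.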
Masoomali Fatehkia
Qatar Computing Research Institute, HBKU, Doha, Qatar
Enes Altinisik
Qatar Computing Research Institute, HBKU, Doha, Qatar
Husrev Taha Sencar
Qatar Computing Research Institute, HBKU
ai safety and security, threat intelligence, security, digital forensics, multimedia security