IndicSafe: A Benchmark for Evaluating Multilingual LLM Safety in South Asia

📅 2026-03-18
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
This study addresses the critical gap in systematically evaluating the safety of multilingual large language models (LLMs) on low-resource, culturally diverse Indo-Aryan languages of South Asia, where significant safety alignment disparities persist. We present IndicSafe, the first safety evaluation benchmark covering 12 Indo-Aryan languages with 6,000 prompts on culturally sensitive topics such as caste, religion, and gender. To enable nuanced assessment, we propose a culture-aware safety evaluation framework featuring prompt-level entropy, category bias scores, and cross-lingual consistency metrics. Experiments reveal that leading models exhibit only 12.8% cross-lingual safety consistency and a SAFE rate variance exceeding 17%, with pervasive over-rejection or false acceptance in low-resource languages. We publicly release IndicSafe to advance research in linguistically and culturally grounded safety alignment.
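The summary names two aggregate measures, cross-lingual safety consistency and SAFE rate variance, without spelling out how they are computed. The sketch below is one plausible reading, assuming a single SAFE/UNSAFE verdict per (prompt, language) pair; the record fields, toy data, and function names are illustrative, not taken from the paper.

```python
from collections import defaultdict
from statistics import pvariance

# Hypothetical records: one safety verdict per (prompt_id, language) pair.
records = [
    {"prompt_id": "p1", "language": "hi", "verdict": "SAFE"},
    {"prompt_id": "p1", "language": "bn", "verdict": "UNSAFE"},
    {"prompt_id": "p2", "language": "hi", "verdict": "SAFE"},
    {"prompt_id": "p2", "language": "bn", "verdict": "SAFE"},
]

def cross_lingual_consistency(records):
    """Fraction of prompts whose verdict is identical across every language."""
    by_prompt = defaultdict(set)
    for r in records:
        by_prompt[r["prompt_id"]].add(r["verdict"])
    consistent = sum(1 for verdicts in by_prompt.values() if len(verdicts) == 1)
    return consistent / len(by_prompt)

def safe_rate_variance(records):
    """Variance of the per-language SAFE rate, a simple measure of spread."""
    safe, total = defaultdict(int), defaultdict(int)
    for r in records:
        total[r["language"]] += 1
        safe[r["language"]] += r["verdict"] == "SAFE"
    rates = [safe[lang] / total[lang] for lang in total]
    return pvariance(rates)

print(cross_lingual_consistency(records))  # 0.5 for the toy records above
print(safe_rate_variance(records))         # 0.0625
```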

📝 Abstract
As large language models (LLMs) are deployed in multilingual settings, their safety behavior in culturally diverse, low-resource languages remains poorly understood. We present the first systematic evaluation of LLM safety across 12 Indic languages, spoken by over 1.2 billion people but underrepresented in LLM training data. Using a dataset of 6,000 culturally grounded prompts spanning caste, religion, gender, health, and politics, we assess 10 leading LLMs on translated variants of each prompt. Our analysis reveals significant safety drift: cross-language agreement is just 12.8%, and SAFE rate variance exceeds 17% across languages. Some models over-refuse benign prompts in low-resource scripts and over-flag politically sensitive topics, while others fail to flag unsafe generations. We quantify these failures using prompt-level entropy, category bias scores, and multilingual consistency indices. Our findings highlight critical safety generalization gaps in multilingual LLMs and show that safety alignment does not transfer evenly across languages. We release IndicSafe, the first benchmark to enable culturally informed safety evaluation for Indic deployments, and advocate for language-aware alignment strategies grounded in regional harms.
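The abstract also mentions prompt-level entropy and category bias scores. A minimal sketch of how such metrics could look, assuming the same per-language SAFE/UNSAFE verdicts and a topic category per prompt; the exact definitions are assumptions, not the paper's formulations.

```python
import math
from collections import Counter, defaultdict

def prompt_level_entropy(verdicts):
    """Shannon entropy (bits) of one prompt's verdicts across languages.
    0.0 means every language agrees; higher values mean more disagreement."""
    counts = Counter(verdicts)
    n = len(verdicts)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def category_bias_scores(records):
    """Per-category SAFE rate minus the overall SAFE rate; positive values
    suggest over-acceptance for that category, negative values over-refusal."""
    overall = sum(r["verdict"] == "SAFE" for r in records) / len(records)
    safe, total = defaultdict(int), defaultdict(int)
    for r in records:
        total[r["category"]] += 1
        safe[r["category"]] += r["verdict"] == "SAFE"
    return {cat: safe[cat] / total[cat] - overall for cat in total}

# Toy usage with hypothetical field names.
print(prompt_level_entropy(["SAFE", "SAFE", "UNSAFE", "SAFE"]))  # ~0.811 bits
records = [
    {"category": "caste", "verdict": "SAFE"},
    {"category": "caste", "verdict": "UNSAFE"},
    {"category": "health", "verdict": "SAFE"},
]
print(category_bias_scores(records))  # caste below overall, health above
```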
Problem

Research questions and friction points this paper is trying to address.

multilingual LLM safety
Indic languages
safety generalization
cultural grounding
low-resource languages
Innovation

Methods, ideas, or system contributions that make the work stand out.

multilingual LLM safety
Indic languages
safety alignment
cultural grounding
benchmark