IndicJR: A Judge-Free Benchmark of Jailbreak Robustness in South Asian Languages

📅 2026-02-18
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Current safety alignment evaluations of large language models predominantly focus on English and contractual constraint scenarios, with a notable absence of systematic studies on robustness against multilingual jailbreak attacks in South Asian languages. This work proposes IndicJR, the first judge-free jailbreak evaluation benchmark tailored for South Asian languages, covering 12 languages spoken by over 2.1 billion people and comprising 45,216 structured and natural prompts. Through multilingual adversarial generation, automated safety detection, and human audit validation, the study reveals three key findings: mainstream models exhibit near-zero refusal rates under natural-language jailbreak attempts; English-based attacks transfer effectively to South Asian languages; and Romanized inputs significantly degrade model safety, showing a correlation of 0.28–0.32 with the Jailbreak Success Rate (JSR) metric.

Technology Category

Application Category

📝 Abstract
Safety alignment of large language models (LLMs) is mostly evaluated in English and contract-bound, leaving multilingual vulnerabilities understudied. We introduce \textbf{Indic Jailbreak Robustness (IJR)}, a judge-free benchmark for adversarial safety across 12 Indic and South Asian languages (2.1 Billion speakers), covering 45216 prompts in JSON (contract-bound) and Free (naturalistic) tracks. IJR reveals three patterns. (1) Contracts inflate refusals but do not stop jailbreaks: in JSON, LLaMA and Sarvam exceed 0.92 JSR, and in Free all models reach 1.0 with refusals collapsing. (2) English to Indic attacks transfer strongly, with format wrappers often outperforming instruction wrappers. (3) Orthography matters: romanized or mixed inputs reduce JSR under JSON, with correlations to romanization share and tokenization (approx 0.28 to 0.32) indicating systematic effects. Human audits confirm detector reliability, and lite-to-full comparisons preserve conclusions. IJR offers a reproducible multilingual stress test revealing risks hidden by English-only, contract-focused evaluations, especially for South Asian users who frequently code-switch and romanize.
Problem

Research questions and friction points this paper is trying to address.

multilingual safety
jailbreak robustness
South Asian languages
large language models
adversarial evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

judge-free benchmark
jailbreak robustness
multilingual safety
Indic languages
romanization effects
🔎 Similar Papers
No similar papers found.