BHRAM-IL: A Benchmark for Hallucination Recognition and Assessment in Multiple Indian Languages

📅 2025-12-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
A research gap exists in hallucination detection for large language models (LLMs) applied to low-resource Indian languages. Method: We introduce BHRAM-IL—a multilingual hallucination recognition and assessment benchmark for Hindi, Gujarati, Marathi, Odia, and English—comprising 36,047 questions across diverse categories. Our approach combines human-annotated, high-quality data with outputs from multilingual LLMs, and proposes category-sensitive normalized metrics and a Language-Corrected Fuzzy Score (LCFS) to ensure cross-lingual, cross-model, and cross-task comparability. Contribution/Results: We publicly release the dataset and evaluation code. Evaluating 14 state-of-the-art multilingual LLMs on a 10,265-question subset yields a primary score of 0.23 and an LCFS of 0.385, advancing standardization, reproducibility, and practical utility in multilingual hallucination detection.

📝 Abstract
Large language models (LLMs) are increasingly deployed in multilingual applications but often generate plausible yet incorrect or misleading outputs, known as hallucinations. While hallucination detection has been studied extensively in English, under-resourced Indian languages remain largely unexplored. We present BHRAM-IL, a benchmark for hallucination recognition and assessment in multiple Indian languages, covering Hindi, Gujarati, Marathi, and Odia, along with English. The benchmark comprises 36,047 curated questions across nine categories spanning factual, numerical, reasoning, and linguistic tasks. We evaluate 14 state-of-the-art multilingual LLMs on a benchmark subset of 10,265 questions, analyzing cross-lingual and factual hallucinations across languages, models, scales, categories, and domains using category-specific metrics normalized to the (0,1) range. Aggregation over all categories and models yields a primary score of 0.23 and a language-corrected fuzzy score of 0.385, demonstrating the usefulness of BHRAM-IL for hallucination-focused evaluation. The dataset and the code for generation and evaluation are available on GitHub (https://github.com/sambhashana/BHRAM-IL/) and HuggingFace (https://huggingface.co/datasets/sambhashana/BHRAM-IL/) to support future research in multilingual hallucination detection and mitigation.
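The abstract describes scoring model outputs with category-specific metrics normalized to the (0,1) range and then aggregating them into an overall score. A minimal sketch of that evaluation pattern is shown below, assuming a simple character-level fuzzy ratio as the similarity measure; the paper's actual metric definitions (including the language-corrected fuzzy score) are not reproduced here, and the `fuzzy_score` and `aggregate_by_category` helpers are hypothetical names for illustration.

```python
# Hypothetical sketch: per-category fuzzy scoring and aggregation,
# using difflib's character-level ratio as a stand-in similarity measure.
from difflib import SequenceMatcher
from collections import defaultdict

def fuzzy_score(prediction: str, reference: str) -> float:
    """Similarity in [0, 1] between a model answer and the reference."""
    return SequenceMatcher(None, prediction.strip().lower(),
                           reference.strip().lower()).ratio()

def aggregate_by_category(samples):
    """samples: iterable of (category, prediction, reference) triples.
    Returns per-category mean scores and an unweighted overall mean,
    all normalized to [0, 1]."""
    sums, counts = defaultdict(float), defaultdict(int)
    for category, pred, ref in samples:
        sums[category] += fuzzy_score(pred, ref)
        counts[category] += 1
    per_category = {c: sums[c] / counts[c] for c in sums}
    overall = sum(per_category.values()) / len(per_category)
    return per_category, overall

# Toy example with two of the benchmark's task types.
samples = [
    ("factual", "New Delhi", "New Delhi"),   # exact match -> 1.0
    ("factual", "Mumbai", "New Delhi"),      # mismatch -> partial credit
    ("numerical", "42", "42"),
]
per_cat, overall = aggregate_by_category(samples)
```

An exact match scores 1.0, a mismatch scores strictly less, and the overall score is the mean of the per-category means, mirroring how the benchmark's aggregate scores stay comparable across categories of different sizes.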
Problem

Research questions and friction points this paper is trying to address.

Evaluates hallucination detection in under-resourced Indian languages
Assesses multilingual LLMs on factual and cross-lingual hallucinations
Provides a benchmark dataset for hallucination recognition and assessment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Benchmark for hallucination detection in Indian languages
Evaluates multilingual LLMs using category-specific metrics
Provides dataset and code for future research
Hrishikesh Terdalkar
Assistant Professor @ BITS Pilani, Hyderabad | PostDoc LIRIS, UCBL1 | PhD IIT Kanpur
Computational Linguistics · Natural Language Processing · Knowledge Graphs · Software
Kirtan Bhojani
Department of Electrical and Electronics Engineering, BITS Pilani, Hyderabad Campus
Aryan Dongare
Department of Electrical and Electronics Engineering, BITS Pilani, Hyderabad Campus
Omm Aditya Behera
Department of Electrical and Electronics Engineering, BITS Pilani, Hyderabad Campus