🤖 AI Summary
A research gap exists in hallucination detection for large language models (LLMs) applied to low-resource Indian languages. Method: We introduce BHRAM-IL, a multilingual benchmark for hallucination recognition and assessment in Hindi, Gujarati, Marathi, Odia, and English, comprising 36,047 questions across nine categories. The benchmark combines high-quality human-annotated data with outputs from multilingual LLMs, using category-specific metrics normalized to the (0, 1) range and a language-corrected fuzzy score (LCFS) to enable cross-lingual, cross-model, and cross-task comparability. Contribution/Results: We publicly release the dataset and evaluation code. Evaluating 14 state-of-the-art multilingual LLMs on a 10,265-question subset yields a primary score of 0.23 and an LCFS of 0.385, supporting standardization, reproducibility, and practical utility in multilingual hallucination detection.
📝 Abstract
Large language models (LLMs) are increasingly deployed in multilingual applications but often generate plausible yet incorrect or misleading outputs, known as hallucinations. While hallucination detection has been studied extensively in English, under-resourced Indian languages remain largely unexplored. We present BHRAM-IL, a benchmark for hallucination recognition and assessment in multiple Indian languages, covering Hindi, Gujarati, Marathi, and Odia, along with English. The benchmark comprises 36,047 curated questions across nine categories spanning factual, numerical, reasoning, and linguistic tasks. We evaluate 14 state-of-the-art multilingual LLMs on a benchmark subset of 10,265 questions, analyzing cross-lingual and factual hallucinations across languages, models, scales, categories, and domains using category-specific metrics normalized to the (0, 1) range. Aggregation over all categories and models yields a primary score of 0.23 and a language-corrected fuzzy score of 0.385, demonstrating the usefulness of BHRAM-IL for hallucination-focused evaluation. The dataset and the code for generation and evaluation are available on GitHub (https://github.com/sambhashana/BHRAM-IL/) and HuggingFace (https://huggingface.co/datasets/sambhashana/BHRAM-IL/) to support future research in multilingual hallucination detection and mitigation.
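The abstract describes category-specific metrics normalized to the (0, 1) range and then aggregated over all categories and models into a single primary score. The sketch below illustrates one plausible reading of that scheme: min-max normalization per category followed by a plain average. The function names, the normalization choice, and the sample data are illustrative assumptions, not the paper's exact implementation.

```python
def min_max_normalize(scores):
    """Map raw category scores into the (0, 1) range (assumed min-max scheme)."""
    lo, hi = min(scores), max(scores)
    if hi == lo:  # avoid division by zero for constant scores
        return [0.5 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

def primary_score(per_category_scores):
    """Average normalized scores over all categories and models."""
    normalized = [min_max_normalize(cat) for cat in per_category_scores]
    flat = [s for cat in normalized for s in cat]
    return sum(flat) / len(flat)

# Example: raw scores for 3 categories x 4 models, on different scales
raw = [[0.1, 0.4, 0.3, 0.2],
       [2.0, 5.0, 4.0, 3.0],
       [0.7, 0.9, 0.8, 0.6]]
print(round(primary_score(raw), 3))  # → 0.5
```

Normalizing before aggregating keeps categories with large raw ranges (e.g. numerical error counts) from dominating categories scored on narrow ranges.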
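The abstract also reports a language-corrected fuzzy score (LCFS). The paper's exact definition is not given here, but the idea can be sketched as fuzzy string similarity between a model answer and the gold answer, down-weighted when the answer is not in the expected language. The language check, the penalty factor, and the similarity measure (difflib's `SequenceMatcher`) are all assumptions for illustration only.

```python
from difflib import SequenceMatcher

def fuzzy_score(answer: str, gold: str) -> float:
    """Character-level fuzzy similarity in [0, 1]."""
    return SequenceMatcher(None, answer, gold).ratio()

def lcfs(answer: str, gold: str, answer_lang: str,
         expected_lang: str, penalty: float = 0.5) -> float:
    """Hypothetical language-corrected fuzzy score: penalize
    responses given in the wrong language (e.g. an English answer
    to a Hindi question)."""
    score = fuzzy_score(answer, gold)
    if answer_lang != expected_lang:
        score *= penalty  # assumed multiplicative correction
    return score

# An exact match in the expected language scores 1.0
print(round(lcfs("New Delhi", "New Delhi", "en", "en"), 2))  # → 1.0
# The same exact match in the wrong language is penalized
print(round(lcfs("New Delhi", "New Delhi", "en", "hi"), 2))  # → 0.5
```

A correction of this kind separates factual hallucination (wrong content) from cross-lingual failure (right content, wrong language), which the abstract analyzes as distinct phenomena.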