ALARB: An Arabic Legal Argument Reasoning Benchmark

📅 2025-10-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing Arabic-language benchmarks focus predominantly on retrieval and comprehension tasks and lack evaluation datasets designed for open-ended, multi-step reasoning, especially in the legal domain. To address this gap, we introduce ALARB, the first Arabic legal multi-step reasoning benchmark, comprising over 13,000 real-world judgments from Saudi commercial courts. ALARB supports three core tasks: verdict prediction, reasoning-chain completion, and identification of relevant statutory provisions. It is the first dataset to systematically model the interrelationships among case facts, court reasoning, and cited statutory provisions. We use ALARB to instruction-tune a 12B-parameter Arabic large language model. Experimental results show substantial improvements in both verdict prediction and Arabic verdict generation, with the fine-tuned model approaching GPT-4o's performance. These findings validate ALARB's effectiveness in strengthening the deep legal reasoning abilities of Arabic LLMs and its transfer value across complex legal reasoning tasks.

📝 Abstract
We introduce ALARB, a dataset and suite of tasks designed to evaluate the reasoning capabilities of large language models (LLMs) within the Arabic legal domain. While existing Arabic benchmarks cover some knowledge-intensive tasks such as retrieval and understanding, substantial datasets focusing specifically on multistep reasoning for Arabic LLMs, especially in open-ended contexts, are lacking. The dataset comprises over 13K commercial court cases from Saudi Arabia, with each case including the facts presented, the reasoning of the court, the verdict, as well as the cited clauses extracted from the regulatory documents. We define a set of challenging tasks leveraging this dataset and reflecting the complexity of real-world legal reasoning, including verdict prediction, completion of reasoning chains in multistep legal arguments, and identification of relevant regulations based on case facts. We benchmark a representative selection of current open and closed Arabic LLMs on these tasks and demonstrate the dataset's utility for instruction tuning. Notably, we show that instruction-tuning a modest 12B parameter model using ALARB significantly enhances its performance in verdict prediction and Arabic verdict generation, reaching a level comparable to that of GPT-4o.
Problem

Research questions and friction points this paper is trying to address.

Evaluating reasoning capabilities of Arabic legal LLMs
Addressing lack of multistep reasoning datasets for Arabic
Benchmarking legal tasks including verdict prediction and reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Created Arabic legal reasoning benchmark dataset
Defined challenging multistep legal reasoning tasks
Instruction-tuned 12B parameter model using ALARB
Harethah Abu Shairah
King Abdullah University of Science and Technology (KAUST)
Somayah AlHarbi
THIQAH
Abdulaziz AlHussein
THIQAH
Sameer Alsabea
King Abdullah University of Science and Technology (KAUST)
Omar Shaqaqi
King Abdullah University of Science and Technology (KAUST)
Hebah AlShamlan
THIQAH
Omar Knio
King Abdullah University of Science and Technology (KAUST)
George Turkiyyah
King Abdullah University of Science and Technology (KAUST)