From Multiple-Choice to Extractive QA: A Case Study for English and Arabic

📅 2024-04-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the scarcity of extractive question answering (EQA) data for low-resource languages, this work proposes a task-reformulation approach: systematically converting a subset of the multilingual multiple-choice QA dataset BELEBELE into a high-quality English–Arabic parallel EQA resource. Guided by cross-lingually aligned annotation guidelines, the authors construct an English–Arabic EQA benchmark and extend evaluation to five Arabic dialects. Because the method requires neither new text collection nor answer generation, it sharply reduces annotation costs for low-resource languages. Experiments with multilingual and Arabic-specific models (mBERT, XLM-R, and Arabic-BERT) demonstrate strong cross-lingual transfer on the new task. The framework generalizes to the remaining 120 BELEBELE language variants, establishing a high-quality parallel EQA resource for machine reading comprehension and helping to bridge a critical data gap in the field.

📝 Abstract
The rapid evolution of Natural Language Processing (NLP) has favoured major languages such as English, leaving a significant gap for many others due to limited resources. This is especially evident in the context of data annotation, a task whose importance cannot be overstated, but which is time-consuming and costly. Thus, any dataset for resource-poor languages is precious, particularly when it is task-specific. Here, we explore the feasibility of repurposing an existing multilingual dataset for a new NLP task: we repurpose a subset of the BELEBELE dataset (Bandarkar et al., 2023), which was designed for multiple-choice question answering (MCQA), to enable the more practical task of extractive QA (EQA) in the style of machine reading comprehension. We present annotation guidelines and a parallel EQA dataset for English and Modern Standard Arabic (MSA). We also present QA evaluation results for several monolingual and cross-lingual QA pairs covering English, MSA, and five Arabic dialects. We aim to help others adapt our approach for the remaining 120 BELEBELE language variants, many of which are deemed under-resourced. We also provide a thorough analysis and share insights to deepen understanding of the challenges and opportunities in NLP task reformulation.
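The MCQA-to-EQA reformulation described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's method: the field names (`flores_passage`, `question`, `mc_answer1`–`mc_answer4`, `correct_answer_num`) follow the public BELEBELE schema but should be treated as assumptions here, and a simple verbatim string match stands in for the paper's guideline-driven span annotation.

```python
def mcqa_to_eqa(item: dict):
    """Turn one multiple-choice item into a SQuAD-style extractive QA item
    by locating the correct option as a character span in the passage.
    Returns None when the option is not a verbatim substring, in which
    case human annotation (as in the paper) would be needed."""
    passage = item["flores_passage"]
    gold = item[f"mc_answer{item['correct_answer_num']}"]
    start = passage.find(gold)
    if start == -1:
        return None  # answer not extractable as-is; needs manual alignment
    return {
        "context": passage,
        "question": item["question"],
        "answers": {"text": [gold], "answer_start": [start]},
    }

# Toy example (invented data, not from BELEBELE):
example = {
    "flores_passage": "The Nile is the longest river in Africa.",
    "question": "Which river is the longest in Africa?",
    "mc_answer1": "The Amazon",
    "mc_answer2": "The Nile",
    "correct_answer_num": 2,
}
eqa = mcqa_to_eqa(example)
```

In practice many gold options paraphrase the passage rather than quote it, which is exactly why the paper invests in annotation guidelines instead of relying on automatic matching alone.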
Problem

Research questions and friction points this paper is trying to address.

Multilingual Data Utilization
Resource-poor Languages
Extractive Question Answering
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multilingual Dataset Conversion
Extractive Question Answering
Under-resourced Languages