IndicSQuAD: A Comprehensive Multilingual Question Answering Dataset for Indic Languages

📅 2025-05-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Problem: Indian languages are severely underrepresented in question answering (QA) systems and lack dedicated multilingual QA resources.
Method: We introduce IndicSQuAD, the first multilingual extractive QA dataset covering nine major Indian languages, systematically translated and aligned from SQuAD. We propose a multilingual collaborative answer-span alignment mechanism, integrating cross-lingual boundary calibration and MahaSQuAD-derived translation techniques to ensure high-fidelity translation and annotation consistency.
Contribution/Results: We establish the first unified, hierarchical, and reproducible QA benchmark for Indian languages, releasing the full dataset (train/dev/test splits) alongside fine-tuned monolingual BERT and MuRIL-BERT models. Baseline experiments expose critical bottlenecks in low-resource QA, particularly in domain transfer and precise answer-span localization, laying foundational infrastructure and empirical insights for QA research across Indo-Aryan and Dravidian languages.

📝 Abstract
The rapid progress in question-answering (QA) systems has predominantly benefited high-resource languages, leaving Indic languages largely underrepresented despite their vast native speaker base. In this paper, we present IndicSQuAD, a comprehensive multi-lingual extractive QA dataset covering nine major Indic languages, systematically derived from the SQuAD dataset. Building on previous work with MahaSQuAD for Marathi, our approach adapts and extends translation techniques to maintain high linguistic fidelity and accurate answer-span alignment across diverse languages. IndicSQuAD comprises extensive training, validation, and test sets for each language, providing a robust foundation for model development. We evaluate baseline performances using language-specific monolingual BERT models and the multilingual MuRIL-BERT. The results indicate some challenges inherent in low-resource settings. Moreover, our experiments suggest potential directions for future work, including expanding to additional languages, developing domain-specific datasets, and incorporating multimodal data. The dataset and models are publicly shared at https://github.com/l3cube-pune/indic-nlp
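Because IndicSQuAD is an extractive QA dataset derived by translation, each record must keep its answer span aligned with the (translated) context. A minimal sketch of that consistency check, assuming SQuAD-style field names (`context`, `answers`, `text`, `answer_start`) and using an English placeholder record rather than actual IndicSQuAD data:

```python
def span_is_aligned(context: str, answer_text: str, answer_start: int) -> bool:
    """True if the answer text occurs in the context exactly at answer_start."""
    return context[answer_start:answer_start + len(answer_text)] == answer_text


def realign_span(context: str, answer_text: str) -> int:
    """Fallback: index of the first occurrence of the answer, or -1 if absent."""
    return context.find(answer_text)


# Toy SQuAD-style record (placeholder, not actual IndicSQuAD data).
record = {
    "context": "IndicSQuAD covers nine major Indic languages.",
    "answers": [{"text": "nine", "answer_start": 18}],
}

ans = record["answers"][0]
if not span_is_aligned(record["context"], ans["text"], ans["answer_start"]):
    # Translation can shift character offsets; try to recover the span.
    ans["answer_start"] = realign_span(record["context"], ans["text"])
```

This is only the offset-level half of the problem; the paper's alignment mechanism also has to locate the answer in the *target-language* context after translation, which plain string search cannot do on its own.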
Problem

Research questions and friction points this paper is trying to address.

Addressing underrepresentation of Indic languages in QA systems
Creating multilingual QA dataset for nine Indic languages
Evaluating performance challenges in low-resource language settings
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multilingual QA dataset for nine Indic languages
Translation techniques for linguistic fidelity
Monolingual and multilingual BERT model evaluation
Sharvi Endait
Pune Institute of Computer Technology, Pune; L3Cube Labs, Pune
Ruturaj Ghatage
Pune Institute of Computer Technology, Pune; L3Cube Labs, Pune
Aditya Kulkarni
Indian Institute of Technology (IIT) Dharwad
Cybersecurity · Phishing · DNS Security · ML and DL
Rajlaxmi Patil
Pune Institute of Computer Technology, Pune; L3Cube Labs, Pune
Raviraj Joshi
Indian Institute of Technology Madras
computer science · machine learning · natural language processing