🤖 AI Summary
Paraphrase detection in Marathi—a morphologically rich, script-diverse, low-resource Indian language—faces significant challenges due to scarce annotated data. Method: We introduce L3Cube-MahaParaphrase, the first large-scale, human-annotated paraphrase detection corpus for Marathi, comprising 8,000 high-quality sentence pairs. Leveraging this resource, we conduct the first systematic evaluation of Transformer-based models (e.g., BERT) on Marathi paraphrase detection via supervised fine-tuning. Contribution/Results: The corpus fills a critical data gap for low-resource Indian languages and enables downstream applications such as question answering and data augmentation. All data and benchmark models are publicly released, establishing a reproducible, multilingual NLP benchmark and foundational infrastructure for future research.
📝 Abstract
Paraphrases are a vital resource for language understanding tasks such as question answering, style transfer, semantic parsing, and data augmentation. Indic languages pose challenges for natural language processing (NLP) due to their rich morphological and syntactic variations, diverse scripts, and limited availability of annotated data. In this work, we present the L3Cube-MahaParaphrase Dataset, a high-quality paraphrase corpus for Marathi, a low-resource Indic language, consisting of 8,000 sentence pairs, each annotated by human experts as either Paraphrase (P) or Non-paraphrase (NP). We also present the results of standard transformer-based BERT models on this dataset. The dataset and models are publicly shared at https://github.com/l3cube-pune/MarathiNLP