IndicSentEval: How Effectively Do Multilingual Transformer Models Encode Linguistic Properties for Indic Languages?

📅 2024-10-03
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
This study systematically evaluates how well multilingual Transformer models encode eight surface, syntactic, and semantic linguistic properties for Indic languages, and how robust that encoding is under thirteen text perturbations. To this end, it introduces IndicSentEval, the first multilingual probing benchmark covering six Indic languages and comprising approximately 47K sentences, together with a perturbation-sensitive cross-lingual probing setup. Combining linear probing of model representations, linguistically motivated perturbations (e.g., targeted part-of-speech deletion or preservation), and multilingual annotated data, the study reports a cross-lingual dissociation between encoding capacity and robustness: Indic-specific models achieve higher average probing accuracy (+12.3%), whereas general-purpose multilingual models are more robust, outperforming the specialized models by 9.7–15.1% under the most damaging perturbations. Results further show that while general-purpose models perform consistently on English, their behavior on Indic languages is markedly inconsistent.
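For readers unfamiliar with linear probing, the sketch below shows the general recipe under stated assumptions: frozen representations are extracted from a multilingual encoder (mBERT here, as a stand-in for the nine models studied) and a linear classifier is trained to predict a linguistic property. The sentences, labels, and mean pooling are illustrative choices, not the paper's exact configuration.

```python
# Minimal sketch of a linear probe over frozen multilingual encoder
# representations. Model choice, pooling, and labels are assumptions
# for illustration, not the paper's exact setup.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

name = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name).eval()

def embed(sentences, layer=-1):
    """Mean-pool token states from one layer of the frozen encoder."""
    with torch.no_grad():
        batch = tokenizer(sentences, padding=True, truncation=True,
                          return_tensors="pt")
        states = model(**batch, output_hidden_states=True).hidden_states[layer]
        mask = batch["attention_mask"].unsqueeze(-1).float()
        return ((states * mask).sum(1) / mask.sum(1)).numpy()

# Hypothetical probing task: predict a surface property (e.g., a sentence
# length bin) from the pooled sentence representation.
sentences = ["यह एक छोटा वाक्य है।", "लड़का रोज़ सुबह स्कूल जाता है।"]
labels = [0, 1]  # toy length bins; a real probe uses thousands of sentences

probe = LogisticRegression(max_iter=1000).fit(embed(sentences), labels)
print(accuracy_score(labels, probe.predict(embed(sentences))))
```

Probing accuracy on held-out sentences is then read as a measure of how much of the property the frozen representation encodes.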

📝 Abstract
Transformer-based models have revolutionized the field of natural language processing. To understand why they perform so well and to assess their reliability, several studies have focused on questions such as: Which linguistic properties are encoded by these models, and to what extent? How robust are these models in encoding linguistic properties when faced with perturbations in the input text? However, these studies have mainly focused on BERT and the English language. In this paper, we investigate similar questions regarding encoding capability and robustness for 8 linguistic properties across 13 different perturbations in 6 Indic languages, using 9 multilingual Transformer models (7 universal and 2 Indic-specific). To conduct this study, we introduce a novel multilingual benchmark dataset, IndicSentEval, containing approximately 47K sentences. Surprisingly, our probing analysis of surface, syntactic, and semantic properties reveals that while almost all multilingual models demonstrate consistent encoding performance for English, they show mixed results for Indic languages. As expected, Indic-specific multilingual models capture linguistic properties in Indic languages better than universal models. Intriguingly, universal models broadly exhibit better robustness compared to Indic-specific models, particularly under perturbations such as dropping both nouns and verbs, dropping only verbs, or keeping only nouns. Overall, this study provides valuable insights into probing and perturbation-specific strengths and weaknesses of popular multilingual Transformer-based models for different Indic languages. We make our code and dataset publicly available (https://tinyurl.com/IndicSentEval).
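As an illustration of the perturbation side of the study, the sketch below implements POS-targeted edits of the kind the abstract mentions (dropping verbs, dropping nouns and verbs, keeping only nouns). It assumes a Stanza Hindi pipeline purely for demonstration; the paper's own perturbation tooling may differ.

```python
# Hedged sketch of POS-targeted perturbations such as "drop only verbs"
# or "keep only nouns". The Stanza Hindi pipeline is an assumption for
# illustration; run stanza.download("hi") once before first use.
import stanza

nlp = stanza.Pipeline("hi", processors="tokenize,pos")

def perturb(sentence, drop=(), keep=()):
    """Remove words whose UPOS tag is in `drop`, or keep only those in `keep`."""
    doc = nlp(sentence)
    words = [w for s in doc.sentences for w in s.words]
    if keep:
        words = [w for w in words if w.upos in keep]
    else:
        words = [w for w in words if w.upos not in drop]
    return " ".join(w.text for w in words)

sent = "लड़का स्कूल जाता है।"
print(perturb(sent, drop={"VERB"}))          # drop only verbs
print(perturb(sent, drop={"NOUN", "VERB"}))  # drop both nouns and verbs
print(perturb(sent, keep={"NOUN"}))          # keep only nouns
```

Feeding the perturbed sentences back through a trained probe then shows how stable each model's encoding of a property is.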
Problem

Research questions and friction points this paper is trying to address.

Evaluating linguistic property encoding in multilingual transformers for Indic languages
Assessing model robustness against text perturbations across six Indic languages
Comparing universal versus Indic-specific multilingual models' linguistic capabilities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Probing multilingual models for Indic linguistic properties
Introducing IndicSentEval benchmark with 47K sentences
Comparing universal and Indic-specific model robustness (see the sketch after this list)
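As referenced above, the robustness comparison can be read as the drop in probing accuracy from clean to perturbed inputs. The toy numbers below only illustrate the computation; they are not results from the paper.

```python
# Toy illustration of a robustness measure: probing-accuracy drop from
# clean to perturbed inputs. All numbers are made up for demonstration.
def accuracy(preds, labels):
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

labels          = [1, 0, 1, 1, 0]
clean_preds     = [1, 0, 1, 0, 0]  # probe predictions on clean text
perturbed_preds = [1, 0, 0, 0, 0]  # same probe after dropping verbs

drop = accuracy(clean_preds, labels) - accuracy(perturbed_preds, labels)
print(f"robustness drop: {drop:.2f}")  # 0.80 - 0.60 = 0.20
```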
Akhilesh Aravapalli
IIIT Hyderabad, India
Mounika Marreddy
University of Bonn, Germany
S. Oota
Inria, France
Radhika Mamidi
International Institute of Information Technology, Hyderabad
Manish Gupta
Microsoft, Hyderabad, India