!MSA at BAREC Shared Task 2025: Ensembling Arabic Transformers for Readability Assessment

📅 2025-09-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the challenges of class imbalance and scarce annotated data in fine-grained Arabic readability assessment. Methodologically, it proposes a multi-model ensemble framework comprising four Arabic pre-trained language models (AraBERTv2, AraELECTRA, MARBERT, and CAMeLBERT) integrated via a confidence-weighted fusion strategy. To enhance robustness, the framework incorporates diverse loss functions, re-annotation of the SAMER corpus, high-quality synthetic data generation using Gemini 2.5 Flash, and a label correction mechanism. Its key innovations are the exploitation of architectural diversity across Arabic language models and a rarity-aware augmentation strategy targeting underrepresented readability levels. Evaluated on both sentence-level and document-level fine-grained readability tasks, the approach achieves first place across all six subtracks, attaining quadratic weighted kappa (QWK) scores of 87.5% and 87.4% respectively, with post-processing contributing a further 6.3% improvement. These results demonstrate the method's effectiveness and strong generalization capability.
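The summary does not spell out the fusion rule, so the following is only a minimal sketch of one plausible confidence-weighted scheme, not the authors' code: each model's softmax distribution over readability levels is scaled by its own per-sample confidence (here, its maximum class probability) before the votes are summed.

```python
import numpy as np

def confidence_weighted_fusion(model_probs):
    """model_probs: list of (n_samples, n_levels) softmax arrays, one per model."""
    fused = np.zeros_like(model_probs[0])
    for probs in model_probs:
        confidence = probs.max(axis=1, keepdims=True)  # this model's per-sample confidence
        fused += confidence * probs                    # scale the model's vote by its confidence
    return fused.argmax(axis=1)                        # fused readability level per sample

# Illustrative use with two toy "models" over 3 readability levels:
p1 = np.array([[0.7, 0.2, 0.1], [0.3, 0.4, 0.3]])
p2 = np.array([[0.5, 0.3, 0.2], [0.1, 0.2, 0.7]])
print(confidence_weighted_fusion([p1, p2]))  # -> [0 2]
```

Under this scheme a model that is unsure about a sentence contributes less to the final decision for that sentence, which is one simple way to combine heterogeneous models such as AraBERTv2 and MARBERT.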

📝 Abstract
We present MSA's winning system for the BAREC 2025 Shared Task on fine-grained Arabic readability assessment, achieving first place in six of six tracks. Our approach is a confidence-weighted ensemble of four complementary transformer models (AraBERTv2, AraELECTRA, MARBERT, and CAMeLBERT), each fine-tuned with a distinct loss function to capture diverse readability signals. To tackle severe class imbalance and data scarcity, we applied weighted training, advanced preprocessing, SAMER corpus relabeling with our strongest model, and synthetic data generation via Gemini 2.5 Flash, adding about 10,000 rare-level samples. A targeted post-processing step corrected prediction distribution skew, delivering a 6.3 percent Quadratic Weighted Kappa (QWK) gain. Our system reached 87.5 percent QWK at the sentence level and 87.4 percent at the document level, demonstrating the power of model and loss diversity, confidence-informed fusion, and intelligent augmentation for robust Arabic readability prediction.
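Quadratic Weighted Kappa, the metric reported throughout, measures agreement on ordinal labels while penalising disagreements by the squared distance between levels, so near-misses cost less than large errors. A toy computation (assuming scikit-learn is available; the labels below are illustrative, not from the shared task):

```python
from sklearn.metrics import cohen_kappa_score

y_true = [1, 3, 5, 2, 4]   # gold readability levels (illustrative)
y_pred = [1, 4, 5, 2, 3]   # system predictions, with two off-by-one errors
print(cohen_kappa_score(y_true, y_pred, weights="quadratic"))
```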
Problem

Research questions and friction points this paper is trying to address.

Ensembling Arabic transformers for fine-grained readability assessment
Addressing class imbalance and data scarcity in Arabic texts (see the weighted-loss sketch after this list)
Correcting prediction distribution skew for improved accuracy
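The abstract mentions "weighted training" as part of the answer to class imbalance. A minimal sketch of one standard realisation, assuming inverse-frequency class weights fed into PyTorch's cross-entropy loss (not necessarily the authors' exact scheme):

```python
import torch
from collections import Counter

labels = [0, 0, 0, 0, 1, 1, 2]                 # illustrative, imbalanced label list
counts = Counter(labels)
n_classes = len(counts)
weights = torch.tensor(
    [len(labels) / (n_classes * counts[c]) for c in range(n_classes)],
    dtype=torch.float,
)
criterion = torch.nn.CrossEntropyLoss(weight=weights)  # rare levels contribute more to the loss

logits = torch.randn(len(labels), n_classes)           # stand-in for model outputs
loss = criterion(logits, torch.tensor(labels))
```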
Innovation

Methods, ideas, or system contributions that make the work stand out.

Ensemble of four transformer models
Synthetic data generation via Gemini (see the sketch after this list)
Targeted post-processing for distribution correction
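For the Gemini-based augmentation, the sketch below shows one hypothetical way to request extra samples for rare readability levels with the google-generativeai client. The prompt wording, the model id string, and any downstream filtering are assumptions; the paper's actual generation pipeline is not reproduced here.

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")            # placeholder, not a real key
model = genai.GenerativeModel("gemini-2.5-flash")  # model id assumed from the paper's wording

def generate_rare_level_samples(level: int, n: int = 5) -> list[str]:
    # Hypothetical prompt for underrepresented readability levels.
    prompt = (
        f"Write {n} Modern Standard Arabic sentences at readability level {level} "
        "of the BAREC scale, one sentence per line."
    )
    response = model.generate_content(prompt)
    return [line.strip() for line in response.text.splitlines() if line.strip()]
```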
Mohamed Basem
Student, Faculty of Computer Science, MSA University, Egypt
Mohamed Younes
Faculty of Computer Science, MSA University, Egypt
Seif Ahmed
Faculty of Computer Science, MSA University, Egypt
Abdelrahman Moustafa
Faculty of Computer Science, MSA University, Egypt