Bangla MedER: Multi-BERT Ensemble Approach for the Recognition of Bangla Medical Entity

📅 2025-12-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the scarcity of medical named entity recognition (MedNER) research for low-resource languages like Bengali, this work introduces BanglaMedNER—the first high-quality, human-annotated Bengali medical NER dataset. We further propose a Multi-BERT Ensemble model that integrates BERT, DistilBERT, ELECTRA, and RoBERTa through domain-adaptive fine-tuning and majority voting. Experimental results on BanglaMedNER demonstrate an accuracy of 89.58%, outperforming the standalone BERT baseline by 11.80 percentage points and surpassing all comparative baselines. This study bridges dual gaps in low-resource MedNER: it provides the first benchmark dataset for Bengali medical NLP and establishes a reproducible, ensemble-based methodological framework. The dataset and model serve as foundational resources for advancing MedNER in Bengali and other morphologically rich, low-resource languages.

📝 Abstract
Medical Entity Recognition (MedER) is an essential NLP task for extracting meaningful entities from medical corpora. MedER research can contribute substantially to the development of automated systems in the medical sector, ultimately enhancing patient care and outcomes. While extensive research has been conducted on MedER in English, low-resource languages like Bangla remain underexplored. Our work aims to bridge this gap. For Bangla medical entity recognition, this study first examined several transformer models, including BERT, DistilBERT, ELECTRA, and RoBERTa. We also propose a novel Multi-BERT Ensemble approach that outperformed all baseline models with the highest accuracy of 89.58%. Notably, it provides an 11.80% accuracy improvement over the single BERT model, demonstrating its effectiveness for this task. A major challenge in MedER for low-resource languages is the lack of annotated datasets. To address this issue, we developed a high-quality dataset tailored for the Bangla MedER task. The dataset was used to evaluate our model with multiple performance metrics, demonstrating its robustness and applicability. Our findings highlight the potential of Multi-BERT Ensemble models in improving MedER for Bangla and set the foundation for further advancements in low-resource medical NLP.
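The majority-voting step that combines the four fine-tuned models can be sketched as follows. This is a minimal illustration, not the authors' code: the per-model label sequences, the entity tags (`B-Disease`, `B-Drug`, `O`), and the tie-breaking rule (prefer the first model) are all assumptions for the sake of the example.

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-token label predictions from several NER models.

    predictions: list of label sequences, one per model, all aligned
    to the same tokens. Ties are broken in favor of the first model.
    """
    ensembled = []
    for token_labels in zip(*predictions):
        counts = Counter(token_labels)
        top, top_count = counts.most_common(1)[0]
        # prefer the first model's label when counts are tied
        if counts[token_labels[0]] == top_count:
            top = token_labels[0]
        ensembled.append(top)
    return ensembled

# Example: four models tagging the same 3-token sentence
bert    = ["B-Disease", "O", "B-Drug"]
distil  = ["B-Disease", "O", "O"]
electra = ["O",         "O", "B-Drug"]
roberta = ["B-Disease", "O", "B-Drug"]
print(majority_vote([bert, distil, electra, roberta]))
# → ['B-Disease', 'O', 'B-Drug']
```

In a real pipeline each label sequence would come from a separately fine-tuned transformer's token-classification head; voting happens only after all models have predicted over identically tokenized input.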
Problem

Research questions and friction points this paper is trying to address.

Absence of prior models for Bangla medical entity recognition
Lack of annotated MedER datasets for low-resource languages such as Bangla
Limited accuracy of single transformer models in Bangla medical NLP
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-BERT Ensemble approach for Bangla medical entity recognition
High-quality annotated dataset tailored for a low-resource language
Outperforms baseline models with 89.58% accuracy
Tanjim Taharat Aurpa
Gazipur Digital University
Natural Language Processing · Social Media Analysis · Machine Learning · Deep Learning · Computer Vision
Farzana Akter
Department of IoT and Robotics Engineering, University of Frontier Technology, Bangladesh.
Md. Mehedi Hasan
Department of IoT and Robotics Engineering, University of Frontier Technology, Bangladesh.
Shakil Ahmed
Iowa State University
Q-AI/ML · URLLC · Quantum/Classical Tactile Network · Quantum Security
Shifat Ara Rafiq
Department of Software Engineering, University of Frontier Technology, Bangladesh.
Fatema Khan
Department of Computer Science and Engineering, University of Liberal Arts Bangladesh, Dhaka