Leveraging Transfer Learning and Mobile-enabled Convolutional Neural Networks for Improved Arabic Handwritten Character Recognition

📅 2025-09-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Addressing the dual challenges of high computational cost and scarce labeled data in Arabic handwritten character recognition, this paper proposes an efficient recognition framework integrating transfer learning with mobile-optimized lightweight CNNs. We systematically evaluate MobileNet, SqueezeNet, MnasNet, and ShuffleNet under three training paradigms—full fine-tuning, partial fine-tuning, and from-scratch training—across three benchmark datasets: AHCD, HIJJA, and IFHCDB. Results demonstrate that full fine-tuning achieves the best trade-off between accuracy and convergence speed. MnasNet attains 99% accuracy on IFHCDB under full fine-tuning, while ShuffleNet reaches 97% on AHCD and a peak of 92% on the more variable HIJJA; MobileNet is the most consistent performer overall, with ShuffleNet exhibiting the strongest generalization. To the best of our knowledge, this is the first systematic study validating the synergistic efficacy of lightweight CNNs and transfer learning for low-resource Arabic handwriting recognition. The work establishes a reproducible, edge-deployable methodology, advancing practical deployment of Arabic OCR on resource-constrained devices.

📝 Abstract
The study explores the integration of transfer learning (TL) with mobile-enabled convolutional neural networks (MbNets) to enhance Arabic Handwritten Character Recognition (AHCR). Addressing challenges like extensive computational requirements and dataset scarcity, this research evaluates three TL strategies (full fine-tuning, partial fine-tuning, and training from scratch) using four lightweight MbNets: MobileNet, SqueezeNet, MnasNet, and ShuffleNet. Experiments were conducted on three benchmark datasets: AHCD, HIJJA, and IFHCDB. MobileNet emerged as the top-performing model, consistently achieving superior accuracy, robustness, and efficiency, with ShuffleNet excelling in generalization, particularly under full fine-tuning. The IFHCDB dataset yielded the highest results, with 99% accuracy using MnasNet under full fine-tuning, highlighting its suitability for robust character recognition. The AHCD dataset achieved competitive accuracy (97%) with ShuffleNet, while HIJJA posed significant challenges due to its variability, achieving a peak accuracy of 92% with ShuffleNet. Notably, full fine-tuning demonstrated the best overall performance, balancing accuracy and convergence speed, while partial fine-tuning underperformed across metrics. These findings underscore the potential of combining TL and MbNets for resource-efficient AHCR, paving the way for further optimizations and broader applications. Future work will explore architectural modifications, in-depth dataset feature analysis, data augmentation, and advanced sensitivity analysis to enhance model robustness and generalizability.
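The three TL strategies compared in the abstract differ chiefly in which network parameters receive gradient updates. A minimal, library-free sketch of that distinction (layer names and the freeze boundary are illustrative assumptions, not the paper's exact configuration; the actual experiments use pretrained lightweight CNNs such as MobileNet):

```python
# Illustrative sketch of the three transfer-learning paradigms evaluated in
# the paper: full fine-tuning, partial fine-tuning, and training from scratch.
# Layer names and the freeze split are hypothetical placeholders.

LAYERS = ["stem", "block1", "block2", "block3", "classifier"]

def configure(paradigm):
    """Return a {layer_name: trainable} map for the given paradigm."""
    if paradigm == "from_scratch":
        # All layers trainable, weights randomly initialized (no transfer).
        return {name: True for name in LAYERS}
    if paradigm == "full_fine_tuning":
        # Start from pretrained weights and update every layer.
        return {name: True for name in LAYERS}
    if paradigm == "partial_fine_tuning":
        # Keep early feature extractors frozen at their pretrained values;
        # train only the later layers and the task-specific classifier head.
        frozen = {"stem", "block1", "block2"}
        return {name: name not in frozen for name in LAYERS}
    raise ValueError(f"unknown paradigm: {paradigm}")

for paradigm in ("from_scratch", "full_fine_tuning", "partial_fine_tuning"):
    trainable = [n for n, t in configure(paradigm).items() if t]
    print(f"{paradigm}: trainable = {trainable}")
```

In a framework such as PyTorch the same idea is typically expressed by toggling `requires_grad` on parameter groups; the paper's finding is that updating all layers (full fine-tuning) gives the best accuracy/convergence trade-off, while freezing early layers (partial fine-tuning) underperforms.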
Problem

Research questions and friction points this paper is trying to address.

Enhancing Arabic handwritten character recognition accuracy
Addressing computational demands and dataset scarcity issues
Evaluating transfer learning with mobile-optimized neural networks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transfer learning with mobile CNNs for Arabic recognition
Evaluated full and partial fine-tuning strategies
MobileNet achieved top accuracy and efficiency
👥 Authors
Mohsine El Khayati
Systems Theory and Informatics Laboratory, University Moulay Ismail, Meknes, Morocco
Ayyad Maafiri
LMC, Polydisciplinary Faculty of Safi, Cadi Ayyad University, Morocco
Yassine Himeur
College of Engineering and Information Technology, University of Dubai, Dubai, UAE
Hamzah Ali Alkhazaleh
College of Engineering and Information Technology, University of Dubai, Dubai, UAE
Shadi Atalla
Associate Professor in Computing & Information Systems, University of Dubai
Wathiq Mansoor
College of Engineering and Information Technology, University of Dubai, Dubai, UAE