🤖 AI Summary
Historical Arabic handwritten text recognition (HTR) faces challenges including script variability, intricate ligature formation, visual degradation, and hard-to-recognize diacritical marks, all compounded by scarce annotated data. This paper introduces HATFormer, an end-to-end Transformer encoder-decoder framework tailored to low-resource Arabic HTR. It integrates Vision Transformer (ViT)-based image encoding, a compact Arabic subword tokenization strategy, and a progressive fine-tuning pipeline, and it represents the first successful adaptation of a state-of-the-art English HTR model to the highly complex Arabic script. On the largest publicly available historical Arabic handwriting dataset, HATFormer achieves a character error rate (CER) of 8.6%, a 51% improvement over the best prior baseline; on the largest private non-historical Arabic handwriting dataset, it attains a comparable CER of 4.2%, demonstrating strong generalization and practical applicability.
📝 Abstract
Arabic handwritten text recognition (HTR) is challenging, especially for historical texts, due to diverse writing styles and the intrinsic features of Arabic script. Additionally, Arabic handwriting datasets are much smaller than English ones, making it difficult to train generalizable Arabic HTR models. To address these challenges, we propose HATFormer, a transformer-based encoder-decoder architecture that builds on a state-of-the-art English HTR model. By leveraging the transformer's attention mechanism, HATFormer captures spatial contextual information to address the intrinsic challenges of Arabic script by differentiating cursive characters, decomposing visual representations, and identifying diacritics. Our customizations for historical handwritten Arabic include an image processor for effective ViT information preprocessing, a text tokenizer for compact Arabic text representation, and a training pipeline that accounts for the limited amount of historical Arabic handwriting data. HATFormer achieves a character error rate (CER) of 8.6% on the largest public historical handwritten Arabic dataset, a 51% improvement over the best baseline in the literature. HATFormer also attains a comparable CER of 4.2% on the largest private non-historical dataset. Our work demonstrates the feasibility of adapting an English HTR method to a low-resource language with complex, language-specific challenges, contributing to advancements in document digitization, information retrieval, and cultural preservation.
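To make the architecture concrete, the sketch below shows the general shape of a ViT-style encoder feeding a Transformer decoder that emits subword logits, as described in the abstract. This is a minimal illustrative toy, not HATFormer itself: all dimensions (patch size 16, model width 64, vocabulary 100, two layers per stack) are placeholder assumptions, and the paper's actual image processor, Arabic tokenizer, and fine-tuning pipeline are not reproduced here.

```python
import torch
import torch.nn as nn

class TinyHTR(nn.Module):
    """Toy ViT-encoder + Transformer-decoder HTR model.

    Hyperparameters are illustrative placeholders, not HATFormer's settings.
    """

    def __init__(self, vocab_size=100, d_model=64, patch=16, img_h=32, img_w=128):
        super().__init__()
        # ViT-style patch embedding: split the line image into patches
        # and project each patch to a d_model-dimensional token.
        self.patch_embed = nn.Conv2d(1, d_model, kernel_size=patch, stride=patch)
        n_patches = (img_h // patch) * (img_w // patch)
        self.pos = nn.Parameter(torch.zeros(1, n_patches, d_model))

        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)

        # Decoder attends over encoder memory and previously emitted subwords.
        self.tok_embed = nn.Embedding(vocab_size, d_model)
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers=2)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, images, token_ids):
        # images: (B, 1, H, W) grayscale line images
        # token_ids: (B, T) previously decoded subword ids
        x = self.patch_embed(images).flatten(2).transpose(1, 2) + self.pos
        memory = self.encoder(x)
        tgt = self.tok_embed(token_ids)
        # Causal mask so each position only sees earlier subwords.
        mask = nn.Transformer.generate_square_subsequent_mask(token_ids.size(1))
        out = self.decoder(tgt, memory, tgt_mask=mask)
        return self.lm_head(out)  # (B, T, vocab_size) next-subword logits

model = TinyHTR()
logits = model(torch.randn(2, 1, 32, 128), torch.randint(0, 100, (2, 5)))
print(logits.shape)  # torch.Size([2, 5, 100])
```

In a real system the decoder would be run autoregressively at inference time, feeding each predicted subword back in until an end-of-sequence token is produced.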