🤖 AI Summary
To address the degradation of Automatic License Plate Recognition (ALPR) performance on low-quality license plate images, this paper proposes a selective super-resolution enhancement framework that applies preprocessing only to plates whose legibility is poor but recoverable, thereby balancing recognition accuracy and computational efficiency. To support this paradigm, the authors introduce a large-scale, fine-grained legibility-annotated license plate dataset comprising 12,687 annotated plates across 10,210 vehicle images, with four explicit legibility levels alongside concurrent annotations of occlusion states and character-level information, systematically advancing research in license plate legibility classification. Extensive experiments using ViT, ResNet, and YOLO for legibility classification yield F1 scores below 80% for all models, confirming the task's inherent difficulty and exposing limitations of existing approaches. The work makes three key contributions: (i) the construction of a novel benchmark dataset, (ii) a rigorous formalization of the legibility classification task, and (iii) the proposal of a selective enhancement paradigm for ALPR.
📝 Abstract
Automatic License Plate Recognition (ALPR) faces a major challenge when dealing with illegible license plates (LPs). While reconstruction methods such as super-resolution (SR) have emerged, the core issue of recognizing these low-quality LPs remains unresolved. To balance model performance and computational efficiency, image pre-processing should be applied selectively, only to cases that require enhanced legibility. To support research in this area, we introduce a novel dataset comprising 10,210 images of vehicles with 12,687 annotated LPs for legibility classification (the LPLC dataset). The images span a wide range of vehicle types, lighting conditions, and camera/image quality levels. We adopt a fine-grained annotation strategy that includes vehicle- and LP-level occlusions, four legibility categories (perfect, good, poor, and illegible), and character labels for the first three categories (illegible LPs excluded). As a benchmark, we propose a classification task, determining whether an LP image is good enough for recognition, requires super-resolution, or is completely unrecoverable, and evaluate three image recognition networks on it. The overall F1 score, which remained below 80% for all three baseline models (ViT, ResNet, and YOLO), together with the analyses of SR and LP recognition methods, highlights the difficulty of the task and reinforces the need for further research. The proposed dataset is publicly available at https://github.com/lmlwojcik/lplc-dataset.
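The selective enhancement paradigm described above amounts to a three-way routing decision on top of the legibility classifier. A minimal sketch of that routing logic is shown below; the mapping of the four annotation levels to three processing paths (perfect/good → recognize directly, poor → super-resolve first, illegible → discard) is a plausible reading of the benchmark task, not code from the paper, and the function and enum names are hypothetical.

```python
from enum import Enum


class Legibility(Enum):
    """The four legibility levels annotated in the LPLC dataset."""
    PERFECT = "perfect"
    GOOD = "good"
    POOR = "poor"
    ILLEGIBLE = "illegible"


def route_plate(level: Legibility) -> str:
    """Choose a processing path for a plate crop from its predicted legibility.

    Perfect and good plates go straight to recognition; poor plates are
    assumed recoverable and are super-resolved first; illegible plates are
    discarded, saving the cost of SR and OCR on hopeless inputs.
    """
    if level in (Legibility.PERFECT, Legibility.GOOD):
        return "recognize"
    if level is Legibility.POOR:
        return "super_resolve_then_recognize"
    return "discard"
```

In a full pipeline, `route_plate` would consume the output of the legibility classifier (ViT, ResNet, or YOLO in the paper's baselines), so the overall cost savings depend directly on that classifier's accuracy.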