🤖 AI Summary
To address the excessive computational and memory overhead of visual Transformer encoders in scene text recognition (STR), this paper proposes a cascaded visual Transformer architecture that dynamically shortens the visual token sequence via progressive token downsampling, thereby significantly reducing redundant computation while preserving long-range contextual modeling capability. The method integrates seamlessly into end-to-end ViT-decoder STR frameworks without requiring additional post-processing or pretraining adjustments. Evaluated on standard benchmarks, it achieves accuracy comparable to the baseline (92.68% vs. 92.77%) while reducing overall computational complexity by 48% and accelerating inference by 1.9×. Its core innovation is the introduction, for the first time in STR, of a cascaded encoder structure coupled with a learnable token compression mechanism, effectively balancing efficiency and representational capacity. This yields a practical, high-accuracy, low-overhead solution suitable for resource-constrained deployment scenarios.
📝 Abstract
In recent years, vision transformers with a text decoder have demonstrated remarkable performance on Scene Text Recognition (STR) due to their ability to capture long-range dependencies and contextual relationships with high learning capacity. However, the computational and memory demands of these models are significant, limiting their deployment in resource-constrained applications. To address this challenge, we propose an efficient and accurate STR system. Specifically, we focus on improving the efficiency of the encoder by introducing a cascaded-transformers structure. This structure progressively reduces the number of vision tokens during the encoding step, effectively eliminating redundant tokens and reducing computational cost. Our experimental results confirm that our STR system achieves performance comparable to state-of-the-art baselines while substantially decreasing computational requirements. In particular, for large models, the accuracy remains nearly the same (92.77% vs. 92.68%) while computational complexity is almost halved with our structure.
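To make the efficiency argument concrete, the following is a minimal sketch (not the paper's implementation) of how progressively halving the token sequence between encoder stages cuts self-attention cost. All function names and the stage/layer configuration are illustrative assumptions; the paper's learnable token compression is stood in for here by simple average-pooling of adjacent token pairs, and cost is approximated by the O(N²·d) attention term alone.

```python
# Hypothetical sketch of cascaded-encoder token downsampling (NumPy only).
# The learnable compression from the paper is replaced by average-pooling.
import numpy as np

def downsample_tokens(tokens: np.ndarray) -> np.ndarray:
    """Merge adjacent token pairs by averaging: (N, d) -> (N // 2, d)."""
    n, d = tokens.shape
    return tokens[: n - n % 2].reshape(n // 2, 2, d).mean(axis=1)

def attention_flops(n: int, d: int) -> int:
    """Rough per-layer self-attention cost, keeping only the O(N^2 * d) term."""
    return n * n * d

def cascaded_cost(n: int, d: int, stages: int, layers_per_stage: int) -> int:
    """Total attention cost when the token count is halved after every stage."""
    total = 0
    for _ in range(stages):
        total += layers_per_stage * attention_flops(n, d)
        n //= 2  # progressive token reduction between stages
    return total

# Illustrative config: 196 tokens (14x14 patches), width 384,
# a 12-layer baseline vs. 3 cascaded stages of 4 layers each.
tokens = np.random.randn(196, 384)
print(downsample_tokens(tokens).shape)  # (98, 384)

baseline = 12 * attention_flops(196, 384)  # fixed-length encoder
cascaded = cascaded_cost(196, 384, stages=3, layers_per_stage=4)
print(round(cascaded / baseline, 2))  # 0.44 -- attention cost roughly halved
```

Under these toy numbers the cascaded schedule spends (196² + 98² + 49²) / (3·196²) ≈ 0.44 of the baseline's attention compute, consistent in spirit with the "almost halved" complexity reported in the abstract.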