STEFANN: Scene Text Editor Using Font Adaptive Neural Network

📅 2019-03-04
🏛️ Computer Vision and Pattern Recognition
📈 Citations: 58
Influential: 7
🤖 AI Summary
This paper addresses character-level editing of scene text in images, proposing what the authors describe as the first framework of its kind, with applications in error correction, text restoration, and image reuse. The method is a two-stage pipeline: first, a target character is generated from an observed source character while preserving its structure, font identity, and color; second, the source character is replaced by the generated one with geometric alignment and contextual fusion so that structure, typographic style, and spatial layout remain consistent with neighboring characters. The key components are FANnet, a font-adaptive network that enforces structural consistency with the source font, and Colornet, which preserves the source color. Experiments on COCO-Text and ICDAR show improvements over baselines in PSNR and SSIM, and qualitative evaluations confirm the natural appearance and legibility of the edited text.
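To make the second stage of the pipeline concrete, the sketch below mimics what Colornet aims to do: transfer the source character's color onto a generated binary glyph. The real Colornet is a learned network conditioned on the source image; here it is replaced by a crude mean-color heuristic, and all function and variable names are illustrative assumptions, not the paper's API.

```python
import numpy as np

def transfer_color(source_rgb, source_mask, target_mask):
    """Paint the target glyph with the source glyph's color.

    A crude stand-in for Colornet's learned color transfer:
    we average the source character's foreground colors instead
    of predicting them per pixel. Shapes: source_rgb is (H, W, 3),
    the masks are (H, W) booleans marking character pixels.
    """
    fg = source_rgb[source_mask]        # (N, 3) source character colors
    bg = source_rgb[~source_mask]       # (M, 3) background colors
    out = np.empty_like(source_rgb)
    out[:] = bg.mean(axis=0)            # flat background estimate
    out[target_mask] = fg.mean(axis=0)  # paint target glyph with source color
    return out
```

In the paper this step is learned end-to-end, so color gradients and textures are preserved rather than averaged away; the heuristic above only illustrates the input/output contract of the stage.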
📝 Abstract
Textual information in a captured scene plays an important role in scene interpretation and decision making. Though there exist methods that can successfully detect and interpret complex text regions present in a scene, to the best of our knowledge, there is no significant prior work that aims to modify the textual information in an image. The ability to edit text directly on images has several advantages including error correction, text restoration and image reusability. In this paper, we propose a method to modify text in an image at character-level. We approach the problem in two stages. At first, the unobserved character (target) is generated from an observed character (source) being modified. We propose two different neural network architectures - (a) FANnet to achieve structural consistency with source font and (b) Colornet to preserve source color. Next, we replace the source character with the generated character maintaining both geometric and visual consistency with neighboring characters. Our method works as a unified platform for modifying text in images. We present the effectiveness of our method on COCO-Text and ICDAR datasets both qualitatively and quantitatively.
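The abstract's replacement step, maintaining geometric consistency when the generated character is placed back into the image, can be sketched as resizing the generated glyph to the source character's bounding box and painting it in. The nearest-neighbor resize and all names here are my simplifying assumptions; the paper's actual alignment and blending are more involved.

```python
import numpy as np

def place_character(image, glyph_mask, bbox, color):
    """Resize a generated glyph mask to the source character's bounding
    box and paint it into the image. bbox is (y0, x0, y1, x1); glyph_mask
    is an (h, w) boolean array; color is an RGB triple."""
    y0, x0, y1, x1 = bbox
    h, w = y1 - y0, x1 - x0
    gh, gw = glyph_mask.shape
    # Nearest-neighbor resize of the binary glyph to the box size.
    rows = np.arange(h) * gh // h
    cols = np.arange(w) * gw // w
    resized = glyph_mask[rows[:, None], cols[None, :]]
    out = image.copy()
    out[y0:y1, x0:x1][resized] = color  # paint glyph pixels only
    return out
```

Keeping the bounding box of the original character is what preserves spatial layout relative to the neighboring characters; visual consistency (color, background) is handled by the color-transfer stage.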
Problem

Research questions and friction points this paper is trying to address.

Editing textual content directly in scene images
Modifying text at the character level
Maintaining font and color consistency with the original text
Innovation

Methods, ideas, or system contributions that make the work stand out.

Font Adaptive Neural Network (FANnet) for structure-consistent character generation
Unified character-level text modification pipeline
Geometric and visual consistency with neighboring characters