π€ AI Summary
To address text distortion, blurriness, and omission in multi-text generation for complex visual scenes, this paper proposes a staged decoupling and text-image strong alignment rendering framework. Methodologically, it introduces a novel progressive multi-text decoupling strategy and a token-level focus enhancement mechanism, integrated with diffusion-model-driven multi-stage rendering, cross-modal alignment constraints, localized token attention reinforcement, and controllable text layout modeling. Key contributions include: (1) the construction of CVTG-2Kβthe first dedicated benchmark for Complex Visual Text Generation (CVTG); and (2) state-of-the-art performance on CVTG-2K, achieving 23.6% and 31.4% improvements in text completeness and clarity, respectively, while significantly mitigating confusion and omission.
π Abstract
This paper explores the task of Complex Visual Text Generation (CVTG), which centers on generating intricate textual content distributed across diverse regions within visual images. In CVTG, image generation models often rendering distorted and blurred visual text or missing some visual text. To tackle these challenges, we propose TextCrafter, a novel multi-visual text rendering method. TextCrafter employs a progressive strategy to decompose complex visual text into distinct components while ensuring robust alignment between textual content and its visual carrier. Additionally, it incorporates a token focus enhancement mechanism to amplify the prominence of visual text during the generation process. TextCrafter effectively addresses key challenges in CVTG tasks, such as text confusion, omissions, and blurriness. Moreover, we present a new benchmark dataset, CVTG-2K, tailored to rigorously evaluate the performance of generative models on CVTG tasks. Extensive experiments demonstrate that our method surpasses state-of-the-art approaches.