🤖 AI Summary
To address the challenge of word-level typographic control in generated images, this paper presents the first word-level controllable scene text generation approach. Methodologically: (1) it constructs the first large-scale, word-level controllable scene text dataset; (2) it proposes the Text-Image Alignment (TIA) cross-modal framework, which integrates grounding-based localization, a masked latent-space loss, and joint-attention supervision to disentangle multiple words and focus learning on text regions; and (3) it introduces WordCon, a hybrid parameter-efficient fine-tuning (PEFT) method that reparameterizes selected key parameters to improve training efficiency and portability. Experiments demonstrate consistent gains over state-of-the-art methods on key metrics, including typographic precision and text fidelity, while supporting diverse applications such as artistic font generation and image-conditioned text editing. The approach delivers strong controllability, high computational efficiency, and robust transferability across domains.
📝 Abstract
Achieving precise word-level typography control within generated images remains a persistent challenge. To address it, we construct a new word-level controllable scene text dataset and introduce the Text-Image Alignment (TIA) framework. This framework leverages the cross-modal correspondence between text and local image regions provided by grounding models to enhance Text-to-Image (T2I) model training. Furthermore, we propose WordCon, a hybrid parameter-efficient fine-tuning (PEFT) method. WordCon reparameterizes selected key parameters, improving both efficiency and portability, which allows seamless integration into diverse pipelines, including artistic text rendering, text editing, and image-conditioned text rendering. To further enhance controllability, a masked loss at the latent level guides the model to concentrate on learning the text regions of the image, and a joint-attention loss provides feature-level supervision that promotes disentanglement between different words. Both qualitative and quantitative results demonstrate the superiority of our method over the state of the art. The datasets and source code will be available for academic use.
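As a rough illustration of the masked latent-level loss described in the abstract (a sketch, not the authors' implementation): the idea is to up-weight the reconstruction error at latent positions covered by the grounded text regions. The function name, tensor shapes, and the `alpha` weighting hyperparameter below are assumptions.

```python
import torch
import torch.nn.functional as F

def masked_latent_loss(pred, target, text_mask, alpha=1.0):
    """Mask-weighted MSE that emphasizes text regions in latent space.

    pred, target: (B, C, H, W) predicted / ground-truth latents (or noise).
    text_mask:    (B, 1, H, W) binary mask of rendered-text regions,
                  downsampled to the latent resolution (assumed to come
                  from a grounding model's word-level boxes).
    alpha:        extra weight applied inside text regions (assumed
                  hyperparameter; alpha=0 recovers plain MSE).
    """
    per_pixel = F.mse_loss(pred, target, reduction="none")  # (B, C, H, W)
    weight = 1.0 + alpha * text_mask                        # boost text area
    return (per_pixel * weight).mean()
```

With `alpha=0` this reduces to a standard diffusion reconstruction loss; larger `alpha` shifts the training signal toward the word regions, which matches the stated goal of concentrating learning on text areas.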