🤖 AI Summary
This work addresses the lack of text rendering quality assessment methods aligned with human perception for current text-to-image generation models, since mainstream OCR and vision-language models struggle to accurately capture visual text artifacts. The paper introduces the Text-in-Image Quality Assessment (TIQA) task, which quantifies the fidelity of rendered text in generated images by predicting a scalar score aligned with human mean opinion scores (MOS). To support this task, two MOS-annotated datasets are constructed, and a lightweight, no-reference evaluation model, ANTIQA, is proposed. By incorporating text-specific biases, ANTIQA improves PLCC by at least 0.05 on TIQA-Crops and 0.08 on TIQA-Images. When applied to rerank generated outputs, it increases average human-rated text quality by 14%.
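The correlation metric used throughout is PLCC, the Pearson linear correlation coefficient between predicted scores and human MOS. As a minimal illustration (not the paper's evaluation code), it can be computed directly:

```python
import math

def plcc(pred, mos):
    """Pearson linear correlation coefficient between predicted
    quality scores and human mean opinion scores (MOS)."""
    n = len(pred)
    mean_p, mean_m = sum(pred) / n, sum(mos) / n
    cov = sum((p - mean_p) * (m - mean_m) for p, m in zip(pred, mos))
    std_p = math.sqrt(sum((p - mean_p) ** 2 for p in pred))
    std_m = math.sqrt(sum((m - mean_m) ** 2 for m in mos))
    return cov / (std_p * std_m)

# Perfectly linearly related scores give a PLCC of 1.0.
print(plcc([1, 2, 3, 4], [2, 4, 6, 8]))  # → 1.0
```

A PLCC gain of 0.05 on this scale therefore means the model's predictions track human ratings measurably more linearly than the baselines do.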
📝 Abstract
Text rendering remains a persistent failure mode of modern text-to-image (T2I) models, yet existing evaluations rely on OCR correctness or VLM-based judges that are poorly aligned with perceptual text artifacts. We introduce Text-in-Image Quality Assessment (TIQA), a task that predicts a scalar quality score matching human judgments of rendered-text fidelity within cropped text regions. We release two MOS-labeled datasets: TIQA-Crops (10k text crops) and TIQA-Images (1,500 images), spanning 20+ T2I models, including proprietary ones. We also propose ANTIQA, a lightweight method with text-specific biases, and show that it improves correlation with human scores over OCR confidence, VLM judges, and generic NR-IQA metrics by at least $\sim0.05$ on TIQA-Crops and $\sim0.08$ on TIQA-Images, as measured by PLCC. Finally, we show that TIQA models are valuable in downstream tasks: for example, selecting the best-of-5 generations with ANTIQA improves human-rated text quality by $+14\%$ on average, demonstrating practical value for filtering and reranking in generation pipelines.
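The best-of-N reranking described above reduces to scoring each candidate generation and keeping the highest-scored one. A minimal sketch, assuming a hypothetical `score_text_quality` function standing in for a TIQA-style scorer such as ANTIQA (the real model's API is not shown in the abstract):

```python
def score_text_quality(candidate):
    """Placeholder for a no-reference TIQA scorer (e.g. ANTIQA):
    a real model would predict a MOS-aligned scalar from the image.
    Here candidates carry a precomputed score for illustration."""
    return candidate["quality"]

def rerank_best_of_n(candidates):
    """Select the generation with the highest predicted text quality."""
    return max(candidates, key=score_text_quality)

# Five hypothetical generations for one prompt, scored by the model.
candidates = [
    {"id": i, "quality": q}
    for i, q in enumerate([0.41, 0.78, 0.55, 0.62, 0.30])
]
best = rerank_best_of_n(candidates)  # candidate with id 1 (score 0.78)
```

Because the scorer is no-reference and lightweight, this selection step can be dropped into a generation pipeline without ground-truth text or a second model pass per candidate beyond the score itself.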