Text-to-Image Diffusion Models Cannot Count, and Prompt Refinement Cannot Help

📅 2025-03-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Text-to-image diffusion models exhibit systematic counting failures on quantitative instructions (e.g., “three apples”), yet existing studies lack rigorous, standardized evaluation of numerical understanding. Method: We introduce T2ICountBench—the first benchmark dedicated to assessing counting capability—featuring a structured design, difficulty stratification, and human-AI collaborative validation. Our controlled evaluation framework integrates multiple models (open- and closed-source), fine-grained numeric prompts, and human-verified consistency checks. Contribution/Results: Experiments reveal severe limitations across all state-of-the-art models: counting accuracy drops sharply with target quantity (peak 35% at n=2; near-zero for n≥5), and prompt engineering yields no meaningful improvement. T2ICountBench establishes a new standard for evaluating numerical reasoning in text-to-image generation, exposing fundamental gaps in current models’ quantitative comprehension.
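The headline metric above (accuracy peaking at n=2 and collapsing for n≥5) is a per-target-count exact-match rate. A minimal sketch of how such a tabulation could be computed is below; the function name and the (target, generated) pair format are illustrative assumptions, not the paper's actual evaluation code.

```python
# Hypothetical sketch of per-count accuracy tabulation; the data format
# (target_count, generated_count) pairs is an assumption for illustration.
from collections import defaultdict

def counting_accuracy_by_n(results):
    """results: iterable of (target_count, generated_count) pairs.

    Returns {target_count: fraction of images whose generated object
    count exactly matches the requested count}.
    """
    hits = defaultdict(int)
    totals = defaultdict(int)
    for target, generated in results:
        totals[target] += 1
        hits[target] += int(generated == target)
    return {n: hits[n] / totals[n] for n in totals}

# Illustrative (made-up) data showing the reported trend:
# accuracy degrades as the requested count grows.
sample = [(2, 2), (2, 2), (2, 3), (5, 4), (5, 6), (5, 7)]
print(counting_accuracy_by_n(sample))
```

Exact match is a deliberately strict criterion: generating four apples when five were requested scores zero, which matches the benchmark's framing of counting as a hard constraint rather than an approximation task.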

📝 Abstract
Generative modeling is widely regarded as one of the most essential problems in today's AI community, with text-to-image generation having gained unprecedented real-world impacts. Among various approaches, diffusion models have achieved remarkable success and have become the de facto solution for text-to-image generation. However, despite their impressive performance, these models exhibit fundamental limitations in adhering to numerical constraints in user instructions, frequently generating images with an incorrect number of objects. While several prior works have mentioned this issue, a comprehensive and rigorous evaluation of this limitation remains lacking. To address this gap, we introduce T2ICountBench, a novel benchmark designed to rigorously evaluate the counting ability of state-of-the-art text-to-image diffusion models. Our benchmark encompasses a diverse set of generative models, including both open-source and private systems. It explicitly isolates counting performance from other capabilities, provides structured difficulty levels, and incorporates human evaluations to ensure high reliability. Extensive evaluations with T2ICountBench reveal that all state-of-the-art diffusion models fail to generate the correct number of objects, with accuracy dropping significantly as the number of objects increases. Additionally, an exploratory study on prompt refinement demonstrates that such simple interventions generally do not improve counting accuracy. Our findings highlight the inherent challenges in numerical understanding within diffusion models and point to promising directions for future improvements.
Problem

Research questions and friction points this paper is trying to address.

Text-to-image diffusion models fail to count objects accurately.
Existing models struggle with numerical constraints in user prompts.
Prompt refinement does not improve counting accuracy in diffusion models.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces T2ICountBench for counting evaluation
Evaluates state-of-the-art text-to-image diffusion models
Explores prompt refinement impact on counting accuracy
Authors
Yuefan Cao (Zhejiang University)
Xuyang Guo (Guilin University of Electronic Technology)
Jiayan Huo (University of Arizona)
Yingyu Liang (The University of Hong Kong)
Zhenmei Shi (Senior Research Scientist, MongoDB + Voyage AI; PhD, University of Wisconsin–Madison)
Zhao Song (Simons Institute for the Theory of Computing, UC Berkeley)
Jiahao Zhang (Independent Researcher)
Zhuang Zhen (University of Minnesota)