🤖 AI Summary
Text-to-image diffusion models exhibit systematic counting failures on quantitative instructions (e.g., “three apples”), yet existing studies lack rigorous, standardized evaluation of numerical understanding.
Method: We introduce T2ICountBench—the first benchmark dedicated to assessing counting capability—featuring a structured design, difficulty stratification, and human-AI collaborative validation. Our controlled evaluation framework integrates multiple models (open- and closed-source), fine-grained numeric prompts, and human-verified consistency checks.
Contribution/Results: Experiments reveal severe limitations across all state-of-the-art models: counting accuracy drops sharply as the target quantity increases (peaking at 35% for n=2 and falling to near zero for n≥5), and prompt engineering yields no meaningful improvement. T2ICountBench establishes a new standard for evaluating numerical reasoning in text-to-image generation, exposing fundamental gaps in current models’ quantitative comprehension.
📝 Abstract
Generative modeling is widely regarded as one of the most important problems in today's AI community, with text-to-image generation having gained unprecedented real-world impact. Among various approaches, diffusion models have achieved remarkable success and have become the de facto solution for text-to-image generation. However, despite their impressive performance, these models exhibit fundamental limitations in adhering to numerical constraints in user instructions, frequently generating images with an incorrect number of objects. While several prior works have noted this issue, a comprehensive and rigorous evaluation of this limitation remains lacking. To address this gap, we introduce T2ICountBench, a novel benchmark designed to rigorously evaluate the counting ability of state-of-the-art text-to-image diffusion models. Our benchmark encompasses a diverse set of generative models, including both open-source and private systems. It explicitly isolates counting performance from other capabilities, provides structured difficulty levels, and incorporates human evaluations to ensure high reliability. Extensive evaluations with T2ICountBench reveal that all state-of-the-art diffusion models fail to generate the correct number of objects, with accuracy dropping significantly as the number of objects increases. Additionally, an exploratory study on prompt refinement demonstrates that such simple interventions generally do not improve counting accuracy. Our findings highlight the inherent challenges in numerical understanding within diffusion models and point to promising directions for future improvements.
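To make the headline metric concrete, here is a minimal sketch of how per-count exact-match accuracy could be computed. This is an illustration, not the paper's actual pipeline: the `counting_accuracy` helper and its `(requested, detected)` input pairs are hypothetical, standing in for prompting a model with a numeric instruction and counting objects in the output via a detector or human annotator.

```python
from collections import defaultdict

def counting_accuracy(samples):
    """Exact-match counting accuracy, grouped by requested object count.

    `samples` is a list of (requested_count, detected_count) pairs, one per
    generated image; an image is correct only if the detected count equals
    the count named in the prompt.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for requested, detected in samples:
        total[requested] += 1
        if detected == requested:
            correct[requested] += 1
    # Accuracy per requested count n, sorted for readability.
    return {n: correct[n] / total[n] for n in sorted(total)}

# Toy data mimicking the reported trend: small counts are sometimes
# right, larger counts essentially never are.
samples = [(2, 2), (2, 2), (2, 3), (5, 4), (5, 6), (5, 7)]
per_count = counting_accuracy(samples)
```

Reporting accuracy per requested count (rather than one pooled number) is what exposes the degradation pattern the benchmark highlights.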