SANEval: Open-Vocabulary Compositional Benchmarks with Failure-mode Diagnosis

πŸ“… 2026-01-30
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Current text-to-image (T2I) models struggle with complex prompts involving multiple objects, attributes, and spatial relationships, and the field lacks open-vocabulary, fine-grained, and interpretable evaluation metrics. To address this, the work proposes SANEval, the first open-vocabulary compositional-generation evaluation framework capable of diagnosing failure modes. SANEval uses large language models (LLMs) to parse prompt semantics in depth and pairs them with an LLM-enhanced open-vocabulary object detector to automatically assess the content of generated images. Experiments across six mainstream T2I models show that SANEval achieves significantly higher Spearman rank correlation with human evaluations than existing benchmarks, particularly on critical tasks such as attribute binding, spatial reasoning, and numerical expression.

πŸ“ Abstract
The rapid progress of text-to-image (T2I) models has unlocked unprecedented creative potential, yet their ability to faithfully render complex prompts involving multiple objects, attributes, and spatial relationships remains a significant bottleneck. Progress is hampered by a lack of adequate evaluation methods; current benchmarks are often restricted to closed-set vocabularies, lack fine-grained diagnostic capabilities, and fail to provide the interpretable feedback necessary to diagnose and remedy specific compositional failures. We address these challenges by introducing SANEval (Spatial, Attribute, and Numeracy Evaluation), a comprehensive benchmark that establishes a scalable new pipeline for open-vocabulary compositional evaluation. SANEval combines a large language model (LLM) for deep prompt understanding with an LLM-enhanced, open-vocabulary object detector to robustly evaluate compositional adherence, unconstrained by a fixed vocabulary. Through extensive experiments on six state-of-the-art T2I models, we demonstrate that SANEval's automated evaluations provide a more faithful proxy for human assessment; our metric achieves a significantly higher Spearman's rank correlation with human judgments than existing benchmarks across attribute binding, spatial relations, and numeracy tasks. To facilitate future research in compositional T2I generation and evaluation, we will release the SANEval dataset and our open-source evaluation pipeline.
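The headline metric is Spearman's rank correlation between the benchmark's automated scores and human ratings. As a minimal sketch of how such agreement is computed, the snippet below implements a tie-aware Spearman's rho in plain Python; the per-image scores are made-up placeholder values, not SANEval data.

```python
# Illustrative sketch: Spearman's rank correlation between an automated
# metric's per-image scores and human ratings. Scores are hypothetical.

def rank(values):
    """Assign 1-based average ranks, splitting ties by averaging."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over any run of equal values (a tie group)
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank of the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical scores for five generated images.
auto_scores = [0.9, 0.4, 0.7, 0.2, 0.6]
human_scores = [0.8, 0.5, 0.9, 0.1, 0.6]
print(round(spearman(auto_scores, human_scores), 3))  # β†’ 0.9
```

In practice a library routine such as `scipy.stats.spearmanr` would be used instead; the hand-rolled version above just makes the rank-then-correlate logic explicit.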
Problem

Research questions and friction points this paper is trying to address.

text-to-image generation
compositional evaluation
open-vocabulary
failure-mode diagnosis
benchmark
Innovation

Methods, ideas, or system contributions that make the work stand out.

open-vocabulary evaluation
compositional generation
failure-mode diagnosis
LLM-enhanced object detection
text-to-image benchmark