🤖 AI Summary
AI-generated faces (AIGFs) suffer from unique distortions, unrealistic details, and unexpected identity shifts, and existing image quality assessment (IQA) metrics fail to capture fine-grained human preferences. Method: the authors introduce FaceQ, a large-scale, human-annotated database of AI-generated face images with fine-grained quality annotations — 12,255 images from 29 models and 32,742 mean opinion scores (MOSs) from 180 annotators — covering face generation, face customization, and face restoration, with annotations across four dimensions: quality, authenticity, identity (ID) fidelity, and text-image correspondence. On top of FaceQ, they build F-Bench, a benchmark for comparing and evaluating face generation, customization, and restoration models across prompts and evaluation dimensions. Results: mainstream IQA, face quality assessment (FQA), AI-generated-content IQA (AIGCIQA), and preference metrics correlate poorly with human judgments, proving relatively ineffective at evaluating authenticity, ID fidelity, and text-image correspondence. The FaceQ database will be publicly released upon publication.
📝 Abstract
Generative artificial intelligence models exhibit remarkable capabilities in content creation, particularly in face image generation, customization, and restoration. However, current AI-generated faces (AIGFs) often fall short of human preferences due to unique distortions, unrealistic details, and unexpected identity shifts, underscoring the need for a comprehensive quality evaluation framework for AIGFs. To address this need, we introduce FaceQ, a large-scale, comprehensive database of AI-generated Face images with fine-grained Quality annotations reflecting human preferences. The FaceQ database comprises 12,255 images generated by 29 models across three tasks: (1) face generation, (2) face customization, and (3) face restoration. It includes 32,742 mean opinion scores (MOSs) from 180 annotators, assessed across four dimensions: quality, authenticity, identity (ID) fidelity, and text-image correspondence. Using the FaceQ database, we establish F-Bench, a benchmark for comparing and evaluating face generation, customization, and restoration models, highlighting their strengths and weaknesses across various prompts and evaluation dimensions. Additionally, we assess the performance of existing image quality assessment (IQA), face quality assessment (FQA), AI-generated content image quality assessment (AIGCIQA), and preference evaluation metrics, demonstrating that these standard metrics are relatively ineffective at evaluating authenticity, ID fidelity, and text-image correspondence. The FaceQ database will be made publicly available upon publication.
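Benchmarking a quality metric against MOS annotations, as described above, typically reduces to computing rank and linear correlations between the metric's predictions and the human scores. The sketch below is a minimal, self-contained illustration (not the paper's code) of SRCC and PLCC, the two correlations conventionally reported for this; the toy scores are invented for the example.

```python
# Minimal sketch: correlating a metric's predictions with human MOS.
# SRCC (Spearman) measures rank agreement; PLCC (Pearson) linear agreement.

def _rank(xs):
    # 1-based average ranks; ties share the mean of their positions.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    # SRCC is simply the Pearson correlation of the ranks.
    return pearson(_rank(x), _rank(y))

# Toy data (hypothetical): a metric whose scores roughly follow human MOS.
metric = [0.71, 0.42, 0.88, 0.35, 0.60]
mos = [3.9, 2.1, 4.6, 2.8, 3.2]
srcc = spearman(metric, mos)  # ≈ 0.90
plcc = pearson(metric, mos)   # ≈ 0.93
```

A metric whose SRCC/PLCC against MOS stays below ~0.3, as the paper reports for many off-the-shelf metrics on authenticity and ID fidelity, is effectively uninformative about human preference.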