F-Bench: Rethinking Human Preference Evaluation Metrics for Benchmarking Face Generation, Customization, and Restoration

📅 2024-12-17
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
AI-generated faces (AIGFs) suffer from unique artifacts, identity drift, and text–image inconsistency, and existing image quality assessment (IQA) metrics fail to capture fine-grained human preferences. Method: The authors introduce FaceQ, a large-scale, human-annotated benchmark of AI-generated face quality (12,255 images, 32,742 MOS scores) covering face generation, customization, and restoration, with annotations along four dimensions: quality, authenticity, identity (ID) fidelity, and text–image correspondence. Building on FaceQ, they establish F-Bench, a benchmark for comparing and evaluating face generation, customization, and restoration models. Results: Experiments show that mainstream IQA, face quality assessment (FQA), AIGC-oriented IQA, and preference metrics correlate poorly (often below 0.3) with human judgments, particularly on authenticity, ID fidelity, and text–image correspondence. The FaceQ database will be publicly released upon publication.

📝 Abstract
Artificial intelligence generative models exhibit remarkable capabilities in content creation, particularly in face image generation, customization, and restoration. However, current AI-generated faces (AIGFs) often fall short of human preferences due to unique distortions, unrealistic details, and unexpected identity shifts, underscoring the need for a comprehensive quality evaluation framework for AIGFs. To address this need, we introduce FaceQ, a large-scale, comprehensive database of AI-generated Face images with fine-grained Quality annotations reflecting human preferences. The FaceQ database comprises 12,255 images generated by 29 models across three tasks: (1) face generation, (2) face customization, and (3) face restoration. It includes 32,742 mean opinion scores (MOSs) from 180 annotators, assessed across multiple dimensions: quality, authenticity, identity (ID) fidelity, and text-image correspondence. Using the FaceQ database, we establish F-Bench, a benchmark for comparing and evaluating face generation, customization, and restoration models, highlighting strengths and weaknesses across various prompts and evaluation dimensions. Additionally, we assess the performance of existing image quality assessment (IQA), face quality assessment (FQA), AI-generated content image quality assessment (AIGCIQA), and preference evaluation metrics, manifesting that these standard metrics are relatively ineffective in evaluating authenticity, ID fidelity, and text-image correspondence. The FaceQ database will be publicly available upon publication.
Problem

Research questions and friction points this paper is trying to address.

Evaluating AI-generated face quality against human preferences
Assessing authenticity, identity fidelity, and text-image correspondence
Developing a comprehensive benchmark for face generation, customization, and restoration models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large-scale FaceQ database with fine-grained human quality annotations
F-Bench benchmark for model comparison and evaluation
Multi-dimensional human preference assessment (quality, authenticity, ID fidelity, text-image correspondence)
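The paper's central negative result is that standard metrics correlate weakly (below 0.3) with human MOS. A minimal sketch of how such metric-to-human agreement is typically measured, using the Spearman rank correlation coefficient (SRCC) in pure Python; the toy `mos` and `metric` values are hypothetical, and this is a common evaluation convention, not the paper's exact protocol:

```python
def rankdata(values):
    """Assign 1-based ranks, averaging ranks over tied values."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        # extend the run while values are tied with the run's first value
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank of the tied run
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def srcc(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = rankdata(x), rankdata(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical example: human MOS vs. an automatic metric for five images.
mos = [4.2, 3.1, 2.5, 4.8, 1.9]
metric = [0.61, 0.48, 0.50, 0.70, 0.30]
print(round(srcc(mos, metric), 3))  # → 0.9
```

A benchmark like F-Bench would compute such correlations per dimension (quality, authenticity, ID fidelity, text-image correspondence); a metric scoring below 0.3 here essentially fails to rank images the way humans do.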
👥 Authors
Lu Liu — Shanghai Jiao Tong University, Shanghai, China
Huiyu Duan — Shanghai Jiao Tong University (Multimedia Signal Processing)
Qiang Hu — Shanghai Jiao Tong University, Shanghai, China
Liu Yang — Shanghai Jiao Tong University, Shanghai, China
Chunlei Cai — Bilibili Inc. (Video compression, Image compression, Image processing, Deep learning)
Tianxiao Ye — Bilibili Inc., China
Huayu Liu — Shanghai Jiao Tong University, Shanghai, China
Xiaoyun Zhang — Shanghai Jiao Tong University, Shanghai, China
Guangtao Zhai — Professor, IEEE Fellow, Shanghai Jiao Tong University (Multimedia Signal Processing, Visual Quality Assessment, QoE, AI Evaluation, Displays)