UniEval: Unified Holistic Evaluation for Unified Multimodal Understanding and Generation

📅 2025-05-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current unified multimodal models lack an integrated evaluation framework that operates without auxiliary models or annotated images, covers both understanding and generation, and adequately assesses benchmark diversity and instruction-following capability. To address these gaps, we propose UniEval, the first zero-shot, fully instruction-driven unified evaluation framework. Our approach comprises three key contributions: (1) UniBench, a challenging benchmark spanning 81 fine-grained task categories; (2) a holistic evaluation paradigm enabling joint cross-modal task assessment; and (3) UniScore, a metric that achieves substantially higher correlation with human judgments than state-of-the-art metrics (ρ > 0.85). Extensive experiments demonstrate that UniEval precisely characterizes emergent capabilities of unified models, including instruction adherence, cross-task generalization, and multimodal synergy, thereby establishing new performance frontiers.

📝 Abstract
The emergence of unified multimodal understanding and generation models is rapidly attracting attention because of their ability to enhance instruction-following capabilities while minimizing model redundancy. However, these models lack a unified evaluation framework that would enable an elegant, simplified, and overall evaluation. Current models are evaluated on multiple task-specific benchmarks, which suffer from significant limitations: a lack of overall results, errors introduced by extra evaluation models, reliance on extensive labeled images, insufficient benchmark diversity, and metrics with limited capacity for instruction-following evaluation. To tackle these challenges, we introduce UniEval, the first evaluation framework designed for unified multimodal models without extra models, images, or annotations, enabling a simplified and unified evaluation process. The UniEval framework contains a holistic benchmark, UniBench (which also supports visual generation models), along with the corresponding UniScore metric. UniBench includes 81 fine-grained tags that contribute to its high diversity. Experimental results indicate that UniBench is more challenging than existing benchmarks and that UniScore aligns closely with human evaluations, surpassing current metrics. Moreover, we extensively evaluate SoTA unified and visual generation models, uncovering new insights into UniEval's unique value.
Problem

Research questions and friction points this paper is trying to address.

Lack of unified evaluation for multimodal models
Current benchmarks lack diversity and overall results
Need for model-free evaluation with human-aligned metrics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified evaluation framework for multimodal models
No extra models, images, or annotations needed
Holistic benchmark with diverse fine-grained tags
Yi Li
The Hong Kong University of Science and Technology
Haonan Wang
The Hong Kong University of Science and Technology
Qixiang Zhang
PhD Candidate, The Hong Kong University of Science and Technology
AI for Neural Science · Deep Learning · Medical Image Analysis
Boyu Xiao
Harbin Institute of Technology
Chenchang Hu
The Hong Kong University of Science and Technology
Hualiang Wang
The Hong Kong University of Science and Technology
Xiaomeng Li
Assistant Professor, The Hong Kong University of Science and Technology
Medical Image Analysis · AI in Healthcare · Deep Learning