T2AV-Compass: Towards Unified Evaluation for Text-to-Audio-Video Generation

📅 2025-12-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current text-to-audio-video (T2AV) generation lacks a unified, comprehensive evaluation benchmark, particularly for cross-modal alignment, instruction following, and perceptual realism under complex prompts, leaving evaluation practice severely fragmented. To address this, we introduce T2AV-Compass: the first diagnostic, unified evaluation benchmark designed specifically for T2AV tasks. It comprises 500 semantically rich, physically plausible complex prompts; a two-tiered evaluation framework integrating signal-level objective metrics (video/audio quality, audio-visual synchronization, cross-modal similarity) with an MLLM-as-a-Judge subjective protocol (instruction adherence, perceptual realism); and a taxonomy-driven prompt-construction methodology. Systematic evaluation of 11 state-of-the-art models reveals substantial performance gaps relative to human baselines, especially in audio realism, fine-grained audio-visual synchronization, and complex instruction execution, demonstrating the benchmark's difficulty and diagnostic utility.
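The page does not describe how the signal-level synchronization metric is actually computed, so as a rough illustration of what audio-visual synchronization scoring can look like, here is a minimal, self-contained sketch (not the authors' method). It estimates the audio-video offset by cross-correlating an audio onset envelope against a video motion-energy envelope; both envelopes, and the assumption that they are already sampled at the video frame rate, are stand-ins supplied by the caller.

```python
import numpy as np

def corr_at_lag(a: np.ndarray, v: np.ndarray, k: int) -> float:
    """Mean product of the audio envelope delayed by k frames against the video envelope."""
    if k > 0:
        x, y = a[k:], v[:-k]
    elif k < 0:
        x, y = a[:k], v[-k:]
    else:
        x, y = a, v
    return float(np.dot(x, y)) / max(len(x), 1)

def av_sync_offset(audio_onset_env: np.ndarray,
                   video_motion_env: np.ndarray,
                   fps: float,
                   max_lag_s: float = 0.5) -> float:
    """Estimate the audio-video offset in seconds by scanning the lagged
    correlation of a z-scored audio onset envelope against a z-scored video
    motion-energy envelope, both sampled at the video frame rate.
    A positive result means the audio lags the video."""
    assert len(audio_onset_env) == len(video_motion_env), "envelopes must share a sample rate"
    a = (audio_onset_env - audio_onset_env.mean()) / (audio_onset_env.std() + 1e-8)
    v = (video_motion_env - video_motion_env.mean()) / (video_motion_env.std() + 1e-8)
    max_lag = int(round(max_lag_s * fps))
    best = max(range(-max_lag, max_lag + 1), key=lambda k: corr_at_lag(a, v, k))
    return best / fps
```

A clip scoring near zero offset with a sharp correlation peak would count as well synchronized; a real benchmark would likely use a learned sync model, but the lag-scan structure is the same.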

📝 Abstract
Text-to-Audio-Video (T2AV) generation aims to synthesize temporally coherent video and semantically synchronized audio from natural language, yet its evaluation remains fragmented, often relying on unimodal metrics or narrowly scoped benchmarks that fail to capture cross-modal alignment, instruction following, and perceptual realism under complex prompts. To address this limitation, we present T2AV-Compass, a unified benchmark for comprehensive evaluation of T2AV systems, consisting of 500 diverse and complex prompts constructed via a taxonomy-driven pipeline to ensure semantic richness and physical plausibility. In addition, T2AV-Compass introduces a dual-level evaluation framework that integrates objective signal-level metrics for video quality, audio quality, and cross-modal alignment with a subjective MLLM-as-a-Judge protocol for instruction following and realism assessment. Extensive evaluation of 11 representative T2AV systems reveals that even the strongest models fall substantially short of human-level realism and cross-modal consistency, with persistent failures in audio realism, fine-grained synchronization, and instruction following, among other aspects. These results indicate significant room for improvement for future models and highlight the value of T2AV-Compass as a challenging and diagnostic testbed for advancing text-to-audio-video generation.
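The abstract mentions objective cross-modal alignment metrics without naming the encoders. A minimal sketch of how such alignment is commonly scored follows, assuming the caller has already produced embeddings from one joint text/vision encoder (CLIP-like) and one joint text/audio encoder (CLAP-like); these encoder choices are assumptions, not the paper's stated setup.

```python
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity with a small epsilon for numerical safety."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))

def cross_modal_scores(text_vis_emb, video_emb, text_aud_emb, audio_emb):
    """Aggregate text-video and text-audio alignment into one report.

    Assumed inputs (1-D numpy vectors supplied by the caller):
      text_vis_emb, video_emb -- from one joint text/vision encoder (CLIP-like)
      text_aud_emb, audio_emb -- from one joint text/audio encoder (CLAP-like)
    Embeddings from different encoders are never compared directly,
    since their spaces are not shared.
    """
    scores = {
        "text_video": cosine(text_vis_emb, video_emb),
        "text_audio": cosine(text_aud_emb, audio_emb),
    }
    scores["overall"] = float(np.mean([scores["text_video"], scores["text_audio"]]))
    return scores

# Quick check with random stand-in embeddings:
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    tv, v, ta, a = (rng.normal(size=512) for _ in range(4))
    print(cross_modal_scores(tv, v, ta, a))
```

Audio-video similarity, the third alignment axis the summary mentions, would additionally require a single joint space across both modalities (ImageBind-style), which this sketch deliberately omits.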
Problem

Research questions and friction points this paper is trying to address.

Text-to-audio-video generation lacks a unified, comprehensive evaluation benchmark
Existing metrics fail to capture cross-modal alignment and perceptual realism under complex prompts
Even state-of-the-art models fall short in audio realism and fine-grained synchronization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified benchmark with 500 diverse prompts
Dual-level evaluation integrating objective and subjective metrics
MLLM-as-a-Judge protocol for instruction following assessment
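The benchmark's exact judging rubric is not reproduced on this page, so the following is only a plausible sketch of what an MLLM-as-a-Judge protocol looks like in practice. Both `query_mllm` (a hypothetical adapter around whichever multimodal model serves as the judge) and the rubric fields are illustrative assumptions.

```python
import json

JUDGE_PROMPT = """You are judging a generated audio-video clip against its text prompt.
Text prompt: {prompt}

Rate each criterion from 1 (poor) to 5 (excellent). Reply with JSON only:
{{"instruction_following": 1-5, "visual_realism": 1-5,
  "audio_realism": 1-5, "av_consistency": 1-5, "rationale": "one sentence"}}"""

def judge_clip(prompt: str, video_path: str, audio_path: str, query_mllm) -> dict:
    """Score one generated clip with an MLLM judge.

    `query_mllm(text, video_path, audio_path) -> str` is a hypothetical
    stand-in for the actual judge API call; it must return the model's
    raw text reply. The rubric fields above are illustrative, not the
    benchmark's published rubric.
    """
    reply = query_mllm(JUDGE_PROMPT.format(prompt=prompt), video_path, audio_path)
    try:
        return json.loads(reply)
    except json.JSONDecodeError:
        # Judges sometimes wrap JSON in prose; surface the raw reply for triage.
        return {"error": "unparseable judge reply", "raw": reply}
```

Constraining the judge to a fixed JSON schema is what makes subjective scores comparable across the 11 evaluated systems; free-form critiques would not aggregate.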
👥 Authors

Zhe Cao
NJU-LINK Team, Nanjing University

Tao Wang
NJU-LINK Team, Nanjing University

Jiaming Wang
National University of Singapore
Generative AI · Robotics

Yanghai Wang
NJU-LINK Team, Nanjing University

Yuanxing Zhang
Kuaishou Technology
Recommender System · Large Language Model · Video Understanding

Jialu Chen
Kling Team, Kuaishou Technology

Miao Deng
NJU-LINK Team, Nanjing University

Jiahao Wang
NJU-LINK Team, Nanjing University

Yubin Guo
NJU-LINK Team, Nanjing University

Chenxi Liao
NJU-LINK Team, Nanjing University

Yize Zhang
NJU-LINK Team, Nanjing University

Zhaoxiang Zhang
Institute of Automation, Chinese Academy of Sciences
Computer Vision · Pattern Recognition · Biologically-inspired Learning

Jiaheng Liu
NJU-LINK Team, Nanjing University