VideoVerse: How Far is Your T2V Generator from a World Model?

📅 2025-10-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing text-to-video (T2V) evaluation benchmarks suffer from three key limitations: insufficient discriminative power for state-of-the-art models, neglect of event-level temporal causality, and absence of systematic assessment of world knowledge understanding. To address these gaps, we introduce VideoVerse—the first comprehensive T2V benchmark explicitly designed to evaluate "world model" capabilities. Its contributions are threefold: (1) a high-quality prompt set of 300 curated prompts covering 815 events across diverse domains, including nature, sports, indoor scenes, science fiction, and scientific experiments; (2) a ten-dimensional evaluation framework that, for the first time, systematically assesses event-level temporal causal reasoning and world knowledge comprehension; and (3) a human-aligned, automated QA evaluation pipeline built on vision-language models, with prompts rewritten by independent annotators and 793 binary questions targeting dynamic and static attributes. Extensive evaluation of leading open- and closed-source T2V models reveals critical bottlenecks in temporal causal modeling and commonsense reasoning.

📝 Abstract
The recent rapid advancement of Text-to-Video (T2V) generation technologies, which are critical to building "world models", makes the existing benchmarks increasingly insufficient for evaluating state-of-the-art T2V models. First, current evaluation dimensions, such as per-frame aesthetic quality and temporal consistency, are no longer able to differentiate state-of-the-art T2V models. Second, event-level temporal causality, which not only distinguishes video from other modalities but also constitutes a crucial component of world models, is severely underexplored in existing benchmarks. Third, existing benchmarks lack a systematic assessment of world knowledge, an essential capability for building world models. To address these issues, we introduce VideoVerse, a comprehensive benchmark that focuses on evaluating whether a T2V model can understand complex temporal causality and world knowledge in the real world. We collect representative videos across diverse domains (e.g., natural landscapes, sports, indoor scenes, science fiction, chemical and physical experiments) and extract their event-level descriptions with inherent temporal causality, which are then rewritten into text-to-video prompts by independent annotators. For each prompt, we design a suite of binary evaluation questions covering dynamic and static properties, with a total of ten carefully defined evaluation dimensions. In total, VideoVerse comprises 300 carefully curated prompts, involving 815 events and 793 binary evaluation questions. A human-preference-aligned, QA-based evaluation pipeline is then developed using modern vision-language models. Finally, we perform a systematic evaluation of state-of-the-art open-source and closed-source T2V models on VideoVerse, providing an in-depth analysis of how far current T2V generators are from world models.
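The paper does not release its pipeline in this summary, but the abstract's description (binary yes/no questions per prompt, grouped into ten dimensions, judged by a vision-language model) suggests a simple scoring loop. Below is a minimal sketch under that reading; the `ask_vlm` hook, the example questions, and the dimension names are illustrative assumptions, not VideoVerse's actual data or code.

```python
# Sketch of a QA-based T2V evaluation loop (illustrative only).
# `ask_vlm` is a hypothetical hook for a vision-language model judge; the
# questions and dimension names below are placeholders, not VideoVerse's.
from collections import defaultdict
from typing import Callable

# Each item: (generated video for a prompt, binary question, evaluation dimension)
EVAL_SET = [
    ("videos/prompt_001.mp4", "Does the glass shatter after it hits the floor?", "temporal_causality"),
    ("videos/prompt_001.mp4", "Is the liquid in the beaker blue at the end?", "world_knowledge"),
    ("videos/prompt_002.mp4", "Does the sun set before the street lights turn on?", "event_order"),
]

def evaluate(ask_vlm: Callable[[str, str], bool]) -> dict[str, float]:
    """Score each dimension as the fraction of its binary questions answered 'yes'."""
    hits, totals = defaultdict(int), defaultdict(int)
    for video, question, dimension in EVAL_SET:
        totals[dimension] += 1
        if ask_vlm(video, question):  # VLM judges the generated video
            hits[dimension] += 1
    return {d: hits[d] / totals[d] for d in totals}

if __name__ == "__main__":
    # Stub judge for demonstration; replace with a real VLM call.
    scores = evaluate(lambda video, question: False)
    for dim, score in sorted(scores.items()):
        print(f"{dim}: {score:.2f}")
```

In this framing, model ranking reduces to comparing per-dimension pass rates across T2V systems, which is how the benchmark's ten-dimensional scores would be aggregated under these assumptions.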
Problem

Research questions and friction points this paper is trying to address.

Evaluating T2V models' temporal causality understanding
Assessing world knowledge integration in video generation
Benchmarking event-level dynamic and static properties
Innovation

Methods, ideas, or system contributions that make the work stand out.

VideoVerse benchmark evaluates temporal causality understanding
Assesses world knowledge in text-to-video generation models
Uses binary questions across ten defined evaluation dimensions
Zeqing Wang
Sun Yat-Sen University, OPPO Research Institute
Xinyu Wei
PolyU & PKU
Computer Vision, Deep Learning
Bairui Li
The Hong Kong Polytechnic University, OPPO Research Institute
Zhen Guo
The Hong Kong Polytechnic University, OPPO Research Institute
Jinrui Zhang
The Hong Kong Polytechnic University, OPPO Research Institute
Hongyang Wei
Tsinghua University, OPPO Research Institute
Keze Wang
Sun Yat-Sen University
Lei Zhang
The Hong Kong Polytechnic University, OPPO Research Institute