T2VWorldBench: A Benchmark for Evaluating World Knowledge in Text-to-Video Generation

📅 2025-07-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing text-to-video (T2V) models lack systematic benchmarks to evaluate their world knowledge capabilities—particularly regarding semantic consistency and factual accuracy. Method: We introduce T2VWorldBench, the first world-knowledge-oriented T2V evaluation framework, covering six broad domains (e.g., physics, nature, culture) and 60 fine-grained categories, with 1,200 diverse prompts. It integrates human evaluation with automated assessment via vision-language models (VLMs), and constructs a multi-dimensional, hierarchical test suite assessing causal reasoning, attribute consistency, natural laws, and more. Contribution/Results: Evaluating 10 state-of-the-art open-source and commercial T2V models reveals pervasive weaknesses in commonsense reasoning and frequent factual errors. T2VWorldBench establishes a standardized benchmark for world knowledge assessment in T2V generation and provides concrete diagnostic insights and actionable directions for model improvement.

📝 Abstract
Text-to-video (T2V) models have shown remarkable performance in generating visually plausible scenes, yet their ability to leverage world knowledge to ensure semantic consistency and factual accuracy remains largely understudied. To address this gap, we propose T2VWorldBench, the first systematic framework for evaluating the world-knowledge generation abilities of text-to-video models, covering 6 major categories, 60 subcategories, and 1,200 prompts spanning a wide range of domains, including physics, nature, activity, culture, causality, and objects. To capture both human preference and scalable assessment, our benchmark combines human evaluation with automated evaluation using vision-language models (VLMs). We evaluated 10 of the most advanced text-to-video models currently available, ranging from open-source to commercial systems, and found that most models fail to apply world knowledge and generate factually correct videos. These findings reveal a critical gap in the capability of current text-to-video models to leverage world knowledge, offering valuable research opportunities and entry points for building models with robust commonsense reasoning and factual generation capabilities.
Problem

Research questions and friction points this paper is trying to address.

Evaluating world knowledge in text-to-video generation models
Assessing semantic consistency and factual accuracy in T2V outputs
Identifying gaps in commonsense reasoning for video generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

First systematic T2V world knowledge evaluation framework
Combines human and VLM-based automated evaluation
Assesses 10 advanced models across diverse domains
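
The VLM-based automated evaluation described above can be sketched as a simple scoring loop: each benchmark prompt is paired with a factual check question, a VLM is asked whether the generated video satisfies it, and verdicts are aggregated into per-category accuracy. This is a minimal illustration, not the paper's actual pipeline; `ask_vlm`, the file names, and the question phrasing are all hypothetical.

```python
# Hedged sketch of VLM-based benchmark scoring (illustrative names only).
from collections import defaultdict

def ask_vlm(video_path: str, question: str) -> bool:
    """Placeholder for a real VLM query.

    In practice one would sample frames from the video, send them to a
    vision-language model along with the check question, and parse its
    yes/no answer. Stubbed here so the sketch runs standalone.
    """
    return "correct" in video_path  # stand-in verdict for illustration

def score_benchmark(results):
    """Aggregate per-video VLM verdicts into per-category accuracy.

    `results` is an iterable of (category, video_path, check_question).
    """
    passed = defaultdict(int)
    total = defaultdict(int)
    for category, video, question in results:
        total[category] += 1
        if ask_vlm(video, question):
            passed[category] += 1
    return {c: passed[c] / total[c] for c in total}

scores = score_benchmark([
    ("physics", "out/correct_ice.mp4", "Is the ice floating on the water?"),
    ("physics", "out/wrong_ball.mp4", "Does the ball fall downward?"),
    ("culture", "out/correct_flag.mp4", "Is the flag of Japan shown?"),
])
print(scores)  # {'physics': 0.5, 'culture': 1.0}
```

In the benchmark itself this automated score would be reported alongside human ratings, since VLM judges can themselves make factual errors.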