GenState-AI: State-Aware Dataset for Text-to-Video Retrieval on AI-Generated Videos

📅 2026-03-15
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing text-to-video retrieval benchmarks predominantly rely on real-world videos, making it difficult to assess models’ capacity for temporal reasoning and understanding of final states. This work proposes the first AI-generated video retrieval benchmark specifically designed to evaluate sensitivity to end-state changes. It introduces a triplet dataset comprising a main video, a temporally hard negative (differing only in the final state), and a semantically hard negative (with altered semantic content), enabling explicit disentanglement of temporal versus semantic error sources. Leveraging Wan2.2-TI2V-5B, the authors generate short videos with precise control over object positions, quantities, and relational changes. Through triplet-based diagnostic analyses—including ranking statistics and transformation-type decomposition—they reveal that prevailing models frequently misrank temporally plausible but end-state-inconsistent videos, indicating insufficient sensitivity to decisive end-state evidence, while demonstrating relative robustness to semantic perturbations.
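As a rough illustration of the triplet structure described above, a single benchmark entry could be represented as the record below; all field names are hypothetical placeholders, not the released dataset schema.

```python
from dataclasses import dataclass

@dataclass
class GenStateTriplet:
    """One benchmark entry; field names are illustrative, not the released schema."""
    query: str               # text query describing the state transition
    main_video: str          # ID/path of the clip that matches the query
    temporal_negative: str   # same content, but a different final state
    semantic_negative: str   # altered semantic content (e.g., an object swap)
    transition_type: str     # e.g., "position", "quantity", or "relation"
```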

📝 Abstract
Existing text-to-video retrieval benchmarks are dominated by real-world footage where much of the semantics can be inferred from a single frame, leaving temporal reasoning and explicit end-state grounding under-evaluated. We introduce GenState-AI, an AI-generated benchmark centered on controlled state transitions, where each query is paired with a main video, a temporal hard negative that differs only in the decisive end-state, and a semantic hard negative with content substitution, enabling fine-grained diagnosis of temporal vs. semantic confusions beyond appearance matching. Using Wan2.2-TI2V-5B, we generate short clips whose meaning depends on precise changes in position, quantity, and object relations, providing controllable evaluation conditions for state-aware retrieval. We evaluate two representative MLLM-based baselines and observe consistent, interpretable failure patterns: both frequently confuse the main video with the temporal hard negative and over-prefer temporally plausible but end-state-incorrect clips, indicating insufficient grounding in decisive end-state evidence, while being comparatively less sensitive to semantic substitutions. We further introduce triplet-based diagnostic analyses, including relative-order statistics and breakdowns across transition categories, to make temporal vs. semantic failure sources explicit. GenState-AI provides a focused testbed for state-aware, temporally and semantically sensitive text-to-video retrieval, and will be released on huggingface.co.
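A minimal sketch of the relative-order diagnostic the abstract describes, reusing the hypothetical GenStateTriplet record above: given any scoring function score(query, video), count how often the main video outranks each hard negative. The interface is an assumption for illustration, not the paper's released evaluation code.

```python
from typing import Callable, Iterable

def relative_order_stats(
    triplets: Iterable[GenStateTriplet],
    score: Callable[[str, str], float],  # score(query, video_id) -> similarity
) -> dict[str, float]:
    """Fraction of queries whose main video outranks each hard negative."""
    n = beats_temporal = beats_semantic = 0
    for t in triplets:
        s_main = score(t.query, t.main_video)
        beats_temporal += s_main > score(t.query, t.temporal_negative)
        beats_semantic += s_main > score(t.query, t.semantic_negative)
        n += 1
    if n == 0:
        raise ValueError("empty triplet set")
    return {
        "main_over_temporal_neg": beats_temporal / n,
        "main_over_semantic_neg": beats_semantic / n,
    }
```

Any text-video similarity model (e.g., a CLIP-style dual encoder) could be plugged in as `score`; per the abstract, a low temporal accuracy paired with a high semantic accuracy would reproduce the reported failure pattern.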
Problem

Research questions and friction points this paper is trying to address.

text-to-video retrieval
state-awareness
temporal reasoning
end-state grounding
AI-generated videos
Innovation

Methods, ideas, or system contributions that make the work stand out.

state-aware retrieval
temporal reasoning
hard negative mining
AI-generated video benchmark
end-state grounding
Minghan Li (Soochow University, China)
Tongna Chen (Soochow University, China)
Tianrui Lv (Soochow University, China)
Yishuai Zhang (Soochow University, China)
Suchao An (Soochow University, China)
Guodong Zhou (Soochow University, China)
Natural Language Processing · Artificial Intelligence