OmniV2V: Versatile Video Generation and Editing via Dynamic Content Manipulation

šŸ“… 2025-06-02
šŸ“ˆ Citations: 0
✨ Influential: 0
šŸ¤– AI Summary
Existing video generation models are largely confined to single-scene settings and lack cross-scene generalization and fine-grained dynamic content control. To address this, we propose OmniV2V—a unified framework featuring a novel dynamic content manipulation injection module and an LLaVA-based vision-language instruction understanding module. OmniV2V establishes the first comprehensive multi-task benchmark covering eight distinct video editing and synthesis tasks: object motion, insertion, mask-guided editing, virtual try-on, inpainting, outpainting, human animation, and controllable character synthesis. Our method integrates Diffusion Transformers (DiT), vision-language alignment, and multi-task data augmentation. Extensive experiments demonstrate that OmniV2V matches or surpasses state-of-the-art open-source and commercial models across diverse video generation and editing benchmarks, achieving significant improvements in cross-scene generalization and pixel-level manipulation accuracy.

šŸ“ Abstract
The emergence of Diffusion Transformers (DiT) has brought significant advances to video generation, especially in text-to-video and image-to-video tasks. Although video generation is widely applied across fields, most existing models are limited to a single scenario and cannot perform diverse video generation and editing through dynamic content manipulation. We propose OmniV2V, a video model capable of generating and editing videos across different scenarios based on various operations, including object movement, object addition, mask-guided video editing, try-on, inpainting, outpainting, human animation, and controllable character video synthesis. We explore a unified dynamic content manipulation injection module that effectively integrates the requirements of the above tasks. In addition, we design a visual-text instruction module based on LLaVA, enabling the model to effectively understand the correspondence between visual content and instructions. Furthermore, we build a comprehensive multi-task data processing system; because annotations overlap across tasks, the system can efficiently provide data augmentation. Using this system, we construct a multi-type, multi-scenario OmniV2V dataset and its corresponding OmniV2V-Test benchmark. Extensive experiments show that OmniV2V performs as well as, and sometimes better than, the best existing open-source and commercial models on many video generation and editing tasks.
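
The abstract names a unified dynamic content manipulation injection module but gives no implementation details here. Purely as an illustration, the PyTorch sketch below shows one plausible way such a module could work: task-specific condition signals (masks, reference frames, pose maps) are encoded as tokens and fused into the DiT latent stream through residual cross-attention. The class name, dimensions, and fusion scheme are all assumptions, not the paper's actual design.

```python
import torch
import torch.nn as nn

class DynamicContentInjection(nn.Module):
    """Hypothetical sketch: project heterogeneous condition tokens
    (masks, reference frames, pose maps, ...) into the DiT hidden
    space and fuse them with video latents via cross-attention."""

    def __init__(self, hidden_dim: int = 1024, cond_dim: int = 256, num_heads: int = 8):
        super().__init__()
        self.cond_proj = nn.Linear(cond_dim, hidden_dim)  # lift conditions to model width
        self.norm = nn.LayerNorm(hidden_dim)
        self.cross_attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)

    def forward(self, latents: torch.Tensor, cond_tokens: torch.Tensor) -> torch.Tensor:
        # latents: (B, N_video, hidden_dim); cond_tokens: (B, N_cond, cond_dim)
        cond = self.cond_proj(cond_tokens)
        fused, _ = self.cross_attn(self.norm(latents), cond, cond)
        return latents + fused  # residual add keeps the pretrained DiT path intact
```

A residual fusion path of this kind would let a single module serve all eight operations, since each task changes only what is packed into `cond_tokens`.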
Problem

Research questions and friction points this paper is trying to address.

Enabling diverse video generation and editing across scenarios
Integrating dynamic content manipulation for multiple tasks
Improving the model's understanding of visual-text instructions for video synthesis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified dynamic content manipulation injection module
Visual-text instruction module based on LLaVA
Comprehensive multi-task data processing system (see the sketch below)
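
The claim that annotation overlap enables efficient data augmentation can be made concrete with a small example. The sketch below is hypothetical (the summary does not describe the actual pipeline): a single clip annotated with one object mask is repurposed into training pairs for three of the eight tasks. The function names and crop size are invented for illustration.

```python
import numpy as np

def mask_out(frames: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Zero out the masked object region in every frame.
    frames: (T, H, W, C); mask: (T, H, W) with values in {0, 1}."""
    return frames * (1 - mask[..., None]).astype(frames.dtype)

def derive_task_samples(frames: np.ndarray, object_mask: np.ndarray, pad: int = 64) -> dict:
    """Hypothetical augmentation: one masked clip yields training pairs
    for several tasks, since their annotation requirements overlap."""
    erased = mask_out(frames, object_mask)
    return {
        # inpainting: reconstruct the hidden object region
        "inpainting": {"input": erased, "target": frames},
        # object addition: condition on the mask, learn to re-insert the object
        "object_addition": {"input": erased, "condition": object_mask, "target": frames},
        # outpainting: expand a center crop back to the full frame
        "outpainting": {"input": frames[:, pad:-pad, pad:-pad], "target": frames},
    }
```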
šŸ‘„ Authors

Sen Liang
University of Science and Technology of China
video generation, video class-incremental learning

Zhentao Yu
Researcher, Tencent Hunyuan
Computer vision

Zhengguang Zhou
Tencent Hunyuan

Teng Hu
Tencent Hunyuan

Hongmei Wang
Tencent Hunyuan

Yi Chen
Tencent Hunyuan

Qin Lin
Tencent Hunyuan

Yuan Zhou
Tencent Hunyuan

Xin Li
University of Science and Technology of China

Qinglin Lu
Tencent Hunyuan

Zhibo Chen
University of Science and Technology of China