Vivid-VR: Distilling Concepts from Text-to-Video Diffusion Transformer for Photorealistic Video Restoration

πŸ“… 2025-08-20
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
To address distribution drift caused by imperfect multimodal alignment in controllable video restoration, this paper proposes an enhanced restoration framework built upon a text-to-video diffusion Transformer (DiT). Methodologically: (1) ControlNet is integrated for fine-grained spatiotemporal control; (2) a dual-branch connector combines MLP-based mapping and cross-attention to jointly optimize control fidelity and content consistency; (3) a concept distillation strategy leverages a pretrained T2V model to generate high-quality, text-aligned video samples, ensuring semantic fidelity and dynamic quality. Experiments demonstrate significant improvements in texture realism, visual vividness, and inter-frame continuity on synthetic and real-world benchmarks as well as AIGC videos, outperforming state-of-the-art methods across multiple quantitative and qualitative metrics. The code and models are publicly released.

πŸ“ Abstract
We present Vivid-VR, a DiT-based generative video restoration method built upon an advanced T2V foundation model, where ControlNet is leveraged to control the generation process, ensuring content consistency. However, conventional fine-tuning of such controllable pipelines frequently suffers from distribution drift due to imperfect multimodal alignment, resulting in compromised texture realism and temporal coherence. To tackle this challenge, we propose a concept distillation training strategy that utilizes the pretrained T2V model to synthesize training samples with embedded textual concepts, thereby distilling its conceptual understanding to preserve texture and temporal quality. To enhance generation controllability, we redesign the control architecture with two key components: 1) a control feature projector that filters degradation artifacts from input video latents to minimize their propagation through the generation pipeline, and 2) a new ControlNet connector employing a dual-branch design. This connector synergistically combines MLP-based feature mapping with a cross-attention mechanism for dynamic control feature retrieval, enabling both content preservation and adaptive control signal modulation. Extensive experiments show that Vivid-VR performs favorably against existing approaches on both synthetic and real-world benchmarks, as well as AIGC videos, achieving impressive texture realism, visual vividness, and temporal consistency. The code and checkpoints are publicly available at https://github.com/csbhr/Vivid-VR.
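The control-architecture description in the abstract maps naturally onto a small module sketch. The PyTorch snippet below is a minimal illustration, not the released Vivid-VR code (see the GitHub repository for that): the class names, dimensions, token-alignment assumption, and zero-initialized residual injection are assumptions made here for clarity.

```python
import torch
import torch.nn as nn


class ControlFeatureProjector(nn.Module):
    """Stand-in for the control feature projector: a residual MLP that refines
    degraded input-video latents before they reach ControlNet, so degradation
    artifacts are less likely to propagate through the generation pipeline."""

    def __init__(self, dim: int = 1024):
        super().__init__()
        self.refine = nn.Sequential(
            nn.LayerNorm(dim),
            nn.Linear(dim, dim),
            nn.GELU(),
            nn.Linear(dim, dim),
        )

    def forward(self, latents: torch.Tensor) -> torch.Tensor:
        # latents: (B, N, C) tokenized video latents
        return latents + self.refine(latents)


class DualBranchConnector(nn.Module):
    """Stand-in for the dual-branch ControlNet connector: branch 1 maps control
    features with an MLP (content preservation); branch 2 lets DiT tokens
    retrieve control features via cross-attention (adaptive modulation)."""

    def __init__(self, dim: int = 1024, num_heads: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(dim, dim * 4), nn.GELU(), nn.Linear(dim * 4, dim))
        self.norm_q = nn.LayerNorm(dim)
        self.norm_kv = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Zero-initialized output projection so the injected control signal starts as a no-op.
        self.out = nn.Linear(dim, dim)
        nn.init.zeros_(self.out.weight)
        nn.init.zeros_(self.out.bias)

    def forward(self, hidden: torch.Tensor, control: torch.Tensor) -> torch.Tensor:
        # hidden: (B, N, C) DiT tokens; control: (B, N, C) ControlNet tokens
        # (assumes control tokens are spatiotemporally aligned with DiT tokens).
        mapped = self.mlp(control)                       # branch 1: MLP feature mapping
        retrieved, _ = self.attn(self.norm_q(hidden),    # branch 2: cross-attention retrieval
                                 self.norm_kv(control),
                                 self.norm_kv(control))
        return hidden + self.out(mapped + retrieved)     # residual injection into the DiT stream


if __name__ == "__main__":
    B, N, C = 1, 256, 1024
    latents = torch.randn(B, N, C)
    hidden = torch.randn(B, N, C)
    control = ControlFeatureProjector(C)(latents)
    fused = DualBranchConnector(C)(hidden, control)
    print(fused.shape)  # torch.Size([1, 256, 1024])
```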
Problem

Research questions and friction points this paper is trying to address.

Addressing distribution drift in controllable video restoration pipelines
Preserving texture realism and temporal coherence in generated videos
Enhancing generation controllability through architectural redesign
Innovation

Methods, ideas, or system contributions that make the work stand out.

Concept distillation training strategy (see the sketch after this list)
Control feature projector that filters degradation artifacts from input video latents
Dual-branch ControlNet connector combining MLP-based mapping with cross-attention (sketched above)
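The concept distillation strategy can be pictured as a data-synthesis plus fine-tuning loop. The sketch below is an assumption-laden illustration: `t2v_model`, `degrade`, and `restoration_pipeline` are hypothetical placeholders standing in for the pretrained T2V generator, a synthetic degradation operator, and the controllable restoration pipeline being fine-tuned; none of these names come from the paper's released code.

```python
import torch


@torch.no_grad()
def synthesize_training_pair(t2v_model, degrade, prompt: str):
    """Use the frozen pretrained T2V model to produce a concept-aligned clean
    video, then apply synthetic degradations to obtain the training input."""
    clean = t2v_model.generate(prompt)   # hypothetical API: text prompt -> video tensor
    degraded = degrade(clean)            # e.g. blur / noise / compression pipeline
    return degraded, clean


def distillation_step(restoration_pipeline, optimizer, t2v_model, degrade, prompt: str):
    """One fine-tuning step: the restoration model learns to map degraded copies
    back to T2V-synthesized targets, distilling the T2V model's texture and
    temporal 'concepts' into the restoration pipeline."""
    degraded, clean = synthesize_training_pair(t2v_model, degrade, prompt)
    loss = restoration_pipeline.diffusion_loss(degraded, clean, prompt)  # hypothetical loss API
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```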
πŸ”Ž Similar Papers
No similar papers found.
Haoran Bai
Alibaba Group - Taobao & Tmall Group
Xiaoxu Chen
Postdoc, HEC MontrΓ©al
Bayesian Statistics · Machine Learning · Transportation Science
Canqian Yang
Alibaba Group - Taobao & Tmall Group
Zongyao He
Alibaba Group - Taobao & Tmall Group
Sibin Deng
Alibaba Group - Taobao & Tmall Group
Ying Chen
Alibaba Group - Taobao & Tmall Group