VidCLearn: A Continual Learning Approach for Text-to-Video Generation

📅 2025-09-21
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Text-to-video diffusion models struggle with continual learning of new knowledge, typically requiring full retraining from scratch. Method: This paper proposes an incremental learning framework for diffusion-based text-to-video generation, built upon a student-teacher architecture with generative replay. It integrates knowledge distillation, temporal consistency loss, and a retrieval-augmented module to enhance motion coherence and text-video structural alignment. Contribution/Results: Key innovations include efficient incorporation of novel text-video pairs without full model retraining, and retrieval-guided replay coupled with dynamic temporal constraints to jointly preserve semantic fidelity and temporal modeling. Experiments on multiple benchmarks demonstrate superior performance over state-of-the-art baselines in visual quality, semantic alignment, and temporal coherence, while maintaining high inference efficiency and strong incremental adaptability.
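The summary mentions a retrieval-augmented module that supplies structural guidance from stored text-video pairs. The paper's embedding model and index are not specified here; the following is a minimal nearest-neighbour sketch under the assumption that prompts are embedded into vectors and matched by cosine similarity (all names, e.g. `retrieve_guide_video`, are hypothetical).

```python
import numpy as np

def retrieve_guide_video(query_emb, bank_embs, bank_videos):
    """Return the stored video whose text embedding is most similar to the query.

    query_emb: (d,) embedding of the input prompt.
    bank_embs: (N, d) embeddings of stored prompts.
    bank_videos: length-N list of the corresponding videos (any type).
    """
    # Normalize so the dot product equals cosine similarity.
    q = query_emb / np.linalg.norm(query_emb)
    b = bank_embs / np.linalg.norm(bank_embs, axis=1, keepdims=True)
    scores = b @ q
    return bank_videos[int(np.argmax(scores))]
```

The retrieved video would then serve as a structural reference during sampling, rather than as training data.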

📝 Abstract
Text-to-video generation is an emerging field in generative AI, enabling the creation of realistic, semantically accurate videos from text prompts. While current models achieve impressive visual quality and alignment with input text, they typically rely on static knowledge, making it difficult to incorporate new data without retraining from scratch. To address this limitation, we propose VidCLearn, a continual learning framework for diffusion-based text-to-video generation. VidCLearn features a student-teacher architecture where the student model is incrementally updated with new text-video pairs, and the teacher model helps preserve previously learned knowledge through generative replay. Additionally, we introduce a novel temporal consistency loss to enhance motion smoothness and a video retrieval module to provide structural guidance at inference. Our architecture is also designed to be more computationally efficient than existing models while retaining satisfactory generation performance. Experimental results show VidCLearn's superiority over baseline methods in terms of visual quality, semantic alignment, and temporal coherence.
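The abstract describes two training-time components: a distillation signal from the frozen teacher and a temporal consistency loss for motion smoothness. The paper's exact formulations are not given here; this sketch assumes a simple L2 frame-difference penalty and an MSE distillation term (the function names and `lambda_temp` weight are illustrative, not the authors' notation).

```python
import torch
import torch.nn.functional as F

def temporal_consistency_loss(frames: torch.Tensor) -> torch.Tensor:
    """Penalize large changes between adjacent frames.

    frames: (batch, time, channels, height, width).
    """
    diffs = frames[:, 1:] - frames[:, :-1]
    return diffs.pow(2).mean()

def distillation_loss(student_out: torch.Tensor, teacher_out: torch.Tensor) -> torch.Tensor:
    """Match the student's prediction to the frozen teacher's (no gradient to teacher)."""
    return F.mse_loss(student_out, teacher_out.detach())

def continual_step(student_out, teacher_out, frames, lambda_temp=0.1):
    """Combined objective: preserve old knowledge + keep motion smooth."""
    return distillation_loss(student_out, teacher_out) \
        + lambda_temp * temporal_consistency_loss(frames)
```

In a full training loop this would be added to the usual diffusion denoising loss on the new text-video pairs.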
Problem

Research questions and friction points this paper is trying to address.

Enabling text-to-video models to learn new data without full retraining
Preserving previously learned knowledge while incorporating new information
Enhancing motion smoothness and temporal coherence in generated videos
Innovation

Methods, ideas, or system contributions that make the work stand out.

Student-teacher architecture with generative replay
Novel temporal consistency loss for smoother motion
Video retrieval module for structural guidance
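The generative-replay idea above can be sketched as mixing new text-video pairs with samples the frozen teacher regenerates from old prompts, so the student rehearses past knowledge without storing old videos. The interface below (`teacher_generate`, `replay_ratio`) is a hypothetical sketch, not the paper's actual API.

```python
import random

def build_replay_batch(new_pairs, old_prompts, teacher_generate, replay_ratio=0.5):
    """Mix new (prompt, video) pairs with teacher-generated replays of old prompts.

    teacher_generate(prompt) -> video is assumed to be the frozen
    teacher's sampling routine.
    """
    n_replay = min(int(len(new_pairs) * replay_ratio) or 1, len(old_prompts))
    replayed = [(p, teacher_generate(p))
                for p in random.sample(old_prompts, n_replay)]
    batch = list(new_pairs) + replayed
    random.shuffle(batch)  # interleave old and new examples
    return batch
```

Training on such mixed batches is what lets the student update incrementally while the teacher anchors previously learned behavior.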