🤖 AI Summary
Text-to-video diffusion models struggle to learn new knowledge continually, typically requiring full retraining from scratch. Method: This paper proposes an incremental learning framework for diffusion-based text-to-video generation, built on a student-teacher architecture with generative replay. It integrates knowledge distillation, a temporal consistency loss, and a retrieval-augmented module to improve motion coherence and text-video structural alignment. Contribution/Results: Key innovations are the efficient incorporation of novel text-video pairs without full model retraining, and retrieval-guided replay coupled with dynamic temporal constraints that jointly preserve semantic fidelity and temporal modeling. Experiments on multiple benchmarks demonstrate superior performance over state-of-the-art baselines in visual quality, semantic alignment, and temporal coherence, while maintaining high inference efficiency and strong incremental adaptability.
📝 Abstract
Text-to-video generation is an emerging field in generative AI, enabling the creation of realistic, semantically accurate videos from text prompts. While current models achieve impressive visual quality and alignment with input text, they typically rely on static knowledge, making it difficult to incorporate new data without retraining from scratch. To address this limitation, we propose VidCLearn, a continual learning framework for diffusion-based text-to-video generation. VidCLearn features a student-teacher architecture where the student model is incrementally updated with new text-video pairs, and the teacher model helps preserve previously learned knowledge through generative replay. Additionally, we introduce a novel temporal consistency loss to enhance motion smoothness and a video retrieval module to provide structural guidance at inference. Our architecture is also designed to be more computationally efficient than existing models while retaining satisfactory generation performance. Experimental results show VidCLearn's superiority over baseline methods in terms of visual quality, semantic alignment, and temporal coherence.
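The abstract names three training-time ingredients: a distillation signal from the teacher on replayed samples, a temporal consistency loss for motion smoothness, and a loss on the new text-video pairs. A minimal sketch of how such terms might combine is below; the paper's exact formulations are not given here, so the MSE forms, weights, and function names are all illustrative assumptions:

```python
import numpy as np

def temporal_consistency_loss(frames):
    # frames: (T, H, W, C) array of generated video frames.
    # Penalizes large changes between adjacent frames; a hypothetical
    # form of the temporal consistency loss described in the abstract.
    diffs = frames[1:] - frames[:-1]
    return float(np.mean(diffs ** 2))

def replay_distillation_loss(student_out, teacher_out):
    # MSE between student and teacher outputs on replayed
    # (teacher-generated) samples -- a standard distillation choice,
    # assumed here rather than taken from the paper.
    return float(np.mean((student_out - teacher_out) ** 2))

def total_loss(frames, student_out, teacher_out, new_data_loss,
               lam_d=1.0, lam_t=0.1):
    # Combined objective: loss on new text-video pairs + generative-replay
    # distillation + temporal consistency. Weights lam_d, lam_t are
    # illustrative hyperparameters.
    return (new_data_loss
            + lam_d * replay_distillation_loss(student_out, teacher_out)
            + lam_t * temporal_consistency_loss(frames))
```

With identical adjacent frames the temporal term vanishes, and with matching student/teacher outputs the distillation term vanishes, leaving only the new-data loss; the balance between the three terms governs the stability-plasticity trade-off that the student-teacher design targets.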