Video Dataset Condensation with Diffusion Models

📅 2025-05-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenges of rapidly growing video data volumes and prohibitively high training costs, this paper introduces an efficient dataset distillation method designed specifically for video data. The approach comprises two key innovations: (1) a Video Spatio-Temporal U-Net (VST-UNet) that enhances the spatio-temporal diversity and representational fidelity of the selected videos; and (2) Temporal-Aware Cluster-based Distillation (TAC-DT), a training-free clustering algorithm that enables efficient, unsupervised selection of representative video samples. Leveraging a video diffusion model, the method generates high-fidelity synthetic video subsets for distillation. Evaluated on four standard benchmarks, it achieves performance gains of up to 10.61% over the state of the art, substantially outperforming existing data distillation techniques and establishing a new benchmark for video dataset distillation.

📝 Abstract
In recent years, the rapid expansion of dataset sizes and the increasing complexity of deep learning models have significantly escalated the demand for computational resources, both for data storage and model training. Dataset distillation has emerged as a promising solution to address this challenge by generating a compact synthetic dataset that retains the essential information from a large real dataset. However, existing methods often suffer from limited performance and poor data quality, particularly in the video domain. In this paper, we focus on video dataset distillation by employing a video diffusion model to generate high-quality synthetic videos. To enhance representativeness, we introduce Video Spatio-Temporal U-Net (VST-UNet), a model designed to select a diverse and informative subset of videos that effectively captures the characteristics of the original dataset. To further optimize computational efficiency, we explore a training-free clustering algorithm, Temporal-Aware Cluster-based Distillation (TAC-DT), to select representative videos without requiring additional training overhead. We validate the effectiveness of our approach through extensive experiments on four benchmark datasets, demonstrating performance improvements of up to 10.61% over the state-of-the-art. Our method consistently outperforms existing approaches across all datasets, establishing a new benchmark for video dataset distillation.
Problem

Research questions and friction points this paper is trying to address.

Addresses computational demands in deep learning via dataset distillation
Improves video dataset distillation quality with diffusion models
Enhances efficiency with training-free clustering for video selection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses video diffusion model for synthetic videos
Introduces VST-UNet for diverse video selection
Applies TAC-DT for training-free clustering
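The paper does not spell out TAC-DT's internals, but the core idea of training-free, cluster-based selection of representative videos can be sketched with plain Euclidean k-means over per-video feature vectors (e.g. temporally pooled spatio-temporal embeddings). Everything here is an illustrative assumption, not the authors' algorithm: `select_representatives`, the feature shapes, and the Euclidean distance are stand-ins for whatever temporal-aware metric TAC-DT actually uses.

```python
import numpy as np

def select_representatives(features: np.ndarray, k: int, iters: int = 50, seed: int = 0):
    """Pick k representative samples via k-means on per-video features.

    features: (N, D) array, one row per video (e.g. a temporally averaged
    spatio-temporal embedding). Returns the indices of the real videos
    closest to each cluster centroid -- no training is involved.
    """
    rng = np.random.default_rng(seed)
    n = features.shape[0]
    # initialize centroids from k distinct videos (fancy indexing copies)
    centroids = features[rng.choice(n, size=k, replace=False)]
    for _ in range(iters):
        # assign every video to its nearest centroid
        d = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # recompute centroids; keep the old one if a cluster empties
        for c in range(k):
            members = features[labels == c]
            if len(members):
                centroids[c] = members.mean(axis=0)
    # representative = the real video nearest each final centroid
    d = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
    return sorted(set(d.argmin(axis=0)))

# toy demo: three well-separated groups of "video embeddings"
rng = np.random.default_rng(1)
feats = np.vstack([rng.normal(loc, 0.1, size=(20, 8)) for loc in (0.0, 5.0, 10.0)])
reps = select_representatives(feats, k=3)
print(reps)  # one index drawn from each group
```

Selecting the nearest real sample to each centroid (rather than the centroid itself) keeps the distilled subset composed of actual, decodable videos, which matches the training-free selection framing in the abstract.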