🤖 AI Summary
To address the challenges of rapidly growing video data volumes and prohibitively high training costs, this paper introduces the first efficient dataset distillation method specifically designed for video data. The approach comprises two key innovations: (1) a Video Spatio-Temporal U-Net (VST-UNet) that enhances the spatio-temporal diversity and representational fidelity of synthesized videos; and (2) Temporal-Aware Cluster-based Distillation (TAC-DT), a training-free clustering algorithm that enables efficient, unsupervised selection of representative video samples. Leveraging a video diffusion model, the method generates high-fidelity synthetic video subsets for distillation. Evaluated on four standard benchmarks, it achieves an average performance gain of 10.61% over baseline models, substantially outperforming existing dataset distillation techniques and setting a new state of the art for video dataset distillation.
📝 Abstract
In recent years, the rapid expansion of dataset sizes and the increasing complexity of deep learning models have significantly escalated the demand for computational resources, both for data storage and model training. Dataset distillation has emerged as a promising solution to address this challenge by generating a compact synthetic dataset that retains the essential information from a large real dataset. However, existing methods often suffer from limited performance and poor data quality, particularly in the video domain. In this paper, we focus on video dataset distillation by employing a video diffusion model to generate high-quality synthetic videos. To enhance representativeness, we introduce Video Spatio-Temporal U-Net (VST-UNet), a model designed to select a diverse and informative subset of videos that effectively captures the characteristics of the original dataset. To further optimize computational efficiency, we explore a training-free clustering algorithm, Temporal-Aware Cluster-based Distillation (TAC-DT), to select representative videos without additional training overhead. We validate the effectiveness of our approach through extensive experiments on four benchmark datasets, demonstrating performance improvements of up to 10.61% over the state-of-the-art. Our method consistently outperforms existing approaches across all datasets, establishing a new benchmark for video dataset distillation.
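The abstract does not spell out how TAC-DT works internally. As a rough illustration of the *general* idea of training-free, cluster-based representative selection (not the paper's actual algorithm), the sketch below clusters clip-level features with plain k-means and keeps the real clip nearest each centroid. The function name `select_representatives`, the farthest-point initialization, and the assumption that temporal information is summarized by averaging per-frame embeddings are all illustrative choices, not details from the paper.

```python
import numpy as np

def select_representatives(features, k, seed=0):
    """Select k representative samples by clustering clip-level features.

    features: (n, d) array, e.g. per-frame video embeddings averaged
    over the temporal axis so each row summarizes one clip.
    Returns sorted indices of the clips nearest to each cluster centroid.
    """
    rng = np.random.default_rng(seed)
    n = features.shape[0]
    # Farthest-point initialization: start from a random clip, then
    # repeatedly add the clip farthest from all centroids chosen so far.
    centroids = [features[rng.integers(n)]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(features - c, axis=1) for c in centroids], axis=0)
        centroids.append(features[d.argmax()])
    centroids = np.stack(centroids)
    for _ in range(100):  # Lloyd's iterations
        dists = np.linalg.norm(features[:, None, :] - centroids[None], axis=-1)
        labels = dists.argmin(axis=1)
        new = np.stack([
            features[labels == c].mean(axis=0) if np.any(labels == c) else centroids[c]
            for c in range(k)
        ])
        if np.allclose(new, centroids):
            break
        centroids = new
    # Medoid-style selection: keep the real clip closest to each centroid,
    # so the distilled subset contains only genuine samples.
    dists = np.linalg.norm(features[:, None, :] - centroids[None], axis=-1)
    return np.unique(dists.argmin(axis=0))
```

Because the selection runs once over precomputed features with no gradient updates, it illustrates why such a step adds negligible training cost compared with optimization-based distillation.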