🤖 AI Summary
Existing diffusion-based dataset distillation methods suffer from inaccurate distribution matching, substantial noise bias, and decoupled sampling and optimization. To address these issues, this paper proposes an efficient distillation framework grounded in DDIM inversion. Our approach first maps the full training dataset into a highly Gaussian latent space via DDIM inversion, ensuring structural consistency. We then introduce an efficient latent-distribution alignment sampling strategy that jointly optimizes generative fidelity and representational quality. Furthermore, we integrate diffusion model fine-tuning with controllable generation mechanisms to enhance distilled sample diversity and faithfulness. Extensive experiments across multiple network architectures demonstrate consistent and significant improvements over state-of-the-art distillation methods, achieving average classification accuracy gains of 2.1–4.7 percentage points. The implementation is publicly available.
📝 Abstract
Recent deep learning models demand larger datasets, driving the need for dataset distillation to create compact, cost-efficient datasets while maintaining performance. Owing to the powerful image generation capability of diffusion models, they have been introduced to this field to generate distilled images. In this paper, we systematically investigate issues present in current diffusion-based dataset distillation methods, including inaccurate distribution matching, distribution deviation with random noise, and separate sampling. Building on this, we propose D^3HR, a novel diffusion-based framework to generate distilled datasets with high representativeness. Specifically, we adopt DDIM inversion to map the latents of the full dataset from a low-normality latent domain to a high-normality Gaussian domain, preserving information and ensuring structural consistency to generate representative latents for the distilled dataset. Furthermore, we propose an efficient sampling scheme to better align the representative latents with the high-normality Gaussian distribution. Our comprehensive experiments demonstrate that D^3HR achieves higher accuracy across different model architectures compared with state-of-the-art baselines in dataset distillation. Source code: https://github.com/lin-zhao-resoLve/D3HR.
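The core operation described above, mapping clean latents into a near-Gaussian noise domain via DDIM inversion, can be sketched as follows. This is a minimal illustration of the standard deterministic DDIM inversion update, not the paper's actual pipeline: `eps_stub`, the toy noise schedule, and the latent shapes are all hypothetical stand-ins for a trained diffusion model's noise predictor and schedule.

```python
import numpy as np

def ddim_invert(x0, eps_model, alphas_bar):
    """Deterministically map a clean latent x0 toward the Gaussian
    noise domain by running the DDIM update in reverse (inversion).

    alphas_bar: decreasing cumulative alpha products, from ~1 (clean)
    down to a small value (nearly pure noise).
    eps_model(x, t): noise predictor; any callable works for this sketch.
    """
    x = x0
    for t in range(len(alphas_bar) - 1):
        a_t, a_next = alphas_bar[t], alphas_bar[t + 1]
        eps = eps_model(x, t)
        # Predicted clean sample implied by the current latent.
        x0_pred = (x - np.sqrt(1.0 - a_t) * eps) / np.sqrt(a_t)
        # Deterministic DDIM step toward the higher noise level.
        x = np.sqrt(a_next) * x0_pred + np.sqrt(1.0 - a_next) * eps
    return x

# Toy usage with a stub predictor (a real setup would use the trained
# diffusion model's epsilon network on VAE latents of the dataset).
rng = np.random.default_rng(0)
x0 = rng.standard_normal((4, 8))           # hypothetical latent batch
alphas_bar = np.linspace(0.999, 0.01, 50)  # hypothetical noise schedule
eps_stub = lambda x, t: 0.1 * x            # hypothetical noise predictor
x_T = ddim_invert(x0, eps_stub, alphas_bar)
print(x_T.shape)
```

Because the update is deterministic and invertible, running the same steps forward recovers the original latents; the inverted latents live in a distribution much closer to an isotropic Gaussian, which is what makes distribution alignment and sampling tractable in this domain.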