Dataset Distillation as Data Compression: A Rate-Utility Perspective

📅 2025-07-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of balancing storage efficiency and model performance in dataset distillation, this work formulates the problem as a rate–utility joint optimization task for data compression. We propose a unified framework that employs bits-per-class (bpc) as a standardized storage metric, synthesizes samples via learnable latent codes, and jointly minimizes storage cost and distillation loss through a lightweight differentiable decoder, quantized entropy estimation, and Lagrangian optimization. The method is agnostic to backbone architectures and distillation objectives, requiring no predefined number of synthetic samples or class distribution assumptions. Evaluated on CIFAR-10, CIFAR-100, and ImageNet-128, our approach achieves up to 170× higher compression ratio than standard distillation baselines while preserving comparable classification accuracy—significantly improving the rate–utility trade-off curve.

📝 Abstract
Driven by the "scale-is-everything" paradigm, modern machine learning increasingly demands ever-larger datasets and models, yielding prohibitive computational and storage requirements. Dataset distillation mitigates this by compressing an original dataset into a small set of synthetic samples, while preserving its full utility. Yet, existing methods either maximize performance under fixed storage budgets or pursue suitable synthetic data representations for redundancy removal, without jointly optimizing both objectives. In this work, we propose a joint rate-utility optimization method for dataset distillation. We parameterize synthetic samples as optimizable latent codes decoded by extremely lightweight networks. We estimate the Shannon entropy of quantized latents as the rate measure and plug any existing distillation loss as the utility measure, trading them off via a Lagrange multiplier. To enable fair, cross-method comparisons, we introduce bits per class (bpc), a precise storage metric that accounts for sample, label, and decoder parameter costs. On CIFAR-10, CIFAR-100, and ImageNet-128, our method achieves up to $170\times$ greater compression than standard distillation at comparable accuracy. Across diverse bpc budgets, distillation losses, and backbone architectures, our approach consistently establishes better rate-utility trade-offs.
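The abstract's core recipe (quantize the latent codes, estimate their Shannon entropy as the rate term, and trade it off against any distillation loss via a Lagrange multiplier) can be illustrated with a minimal sketch. This is a hypothetical simplification, not the paper's implementation: the function names, hard rounding, and empirical-histogram entropy are assumptions, and the actual method uses a differentiable decoder and entropy model during training.

```python
import numpy as np

def rate_utility_loss(latents, distill_loss, lam, bin_width=1.0):
    """Sketch of a joint rate-utility objective (assumed form).

    latents: 1-D array of latent code values for the synthetic set.
    distill_loss: utility term from any existing distillation objective.
    lam: Lagrange multiplier trading rate against utility.
    """
    # Quantize latents to integer bins (hard rounding here; training
    # would need a differentiable surrogate such as additive noise).
    q = np.round(latents / bin_width).astype(int)
    # Empirical Shannon entropy of the quantized symbols, in bits/symbol.
    _, counts = np.unique(q, return_counts=True)
    p = counts / counts.sum()
    entropy_bits = -np.sum(p * np.log2(p))
    # Total estimated rate in bits for all latent symbols.
    rate_bits = entropy_bits * latents.size
    return distill_loss + lam * rate_bits
```

Sweeping `lam` traces out a rate-utility curve: larger values push the latents toward lower-entropy (cheaper to store) configurations at some cost in accuracy.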
Problem

Research questions and friction points this paper is trying to address.

Compress large datasets into small synthetic samples
Jointly optimize storage and performance in distillation
Improve rate-utility trade-offs across diverse datasets
Innovation

Methods, ideas, or system contributions that make the work stand out.

Joint rate-utility optimization for dataset distillation
Parameterized synthetic samples as optimizable latent codes
Bits per class metric for precise storage measurement
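The bits-per-class metric listed above can be sketched as a simple normalization: total storage for the synthetic samples, their labels, and the decoder parameters, divided by the number of classes. The function and argument names below are assumptions for illustration; the paper defines the precise accounting.

```python
def bits_per_class(sample_bits, label_bits, decoder_param_bits, num_classes):
    """Assumed form of the bpc metric: all storage costs the abstract
    names (samples, labels, decoder parameters), per class."""
    total_bits = sample_bits + label_bits + decoder_param_bits
    return total_bits / num_classes
```

Counting decoder parameters is what makes cross-method comparisons fair: a method that moves information from pixels into a decoder network still pays for it in bpc.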