Condensed Data Expansion Using Model Inversion for Knowledge Distillation

📅 2024-08-25
📈 Citations: 1
Influential: 0
🤖 AI Summary
In knowledge distillation, student-model performance degrades when the condensed dataset carries too little information. To address this, the authors propose a teacher-model inversion-based method that generates complementary synthetic data to strategically augment the condensed dataset, thereby better approximating the original data distribution. This work is the first to integrate model inversion into condensed-dataset expansion, enabling effective single-sample-per-class and few-shot distillation, scenarios where direct distillation fails. By jointly optimizing synthetic-data distribution alignment and knowledge distillation, the approach achieves substantial accuracy gains across multiple datasets and model architectures: it significantly outperforms distillation on condensed data alone and improves on standard model-inversion distillation by up to 11.4%, narrowing the gap between data efficiency and knowledge-transfer fidelity.

📝 Abstract
Condensed datasets offer a compact representation of larger datasets, but training models directly on them or using them to enhance model performance through knowledge distillation (KD) can result in suboptimal outcomes due to limited information. To address this, we propose a method that expands condensed datasets using model inversion, a technique for generating synthetic data based on the impressions of a pre-trained model on its training data. This approach is particularly well-suited for KD scenarios, as the teacher model is already pre-trained and retains knowledge of the original training data. By creating synthetic data that complements the condensed samples, we enrich the training set and better approximate the underlying data distribution, leading to improvements in student model accuracy during knowledge distillation. Our method demonstrates significant gains in KD accuracy compared to using condensed datasets alone and outperforms standard model inversion-based KD methods by up to 11.4% across various datasets and model architectures. Importantly, it remains effective even when using as few as one condensed sample per class, and can also enhance performance in few-shot scenarios where only limited real data samples are available.
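The core primitive of the abstract is model inversion: synthesizing inputs from a pretrained teacher's "impressions" of its training data by optimizing noise until the teacher classifies it confidently. Below is a minimal numpy sketch of that idea, with a linear-softmax classifier standing in for a real pretrained teacher network; the function name, the toy teacher, and all hyperparameters are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def invert_class(teacher_w, teacher_b, target, steps=200, lr=0.5):
    """Gradient-based model inversion: starting from noise, optimize an
    input so the (frozen) teacher assigns it to class `target`.
    Here the teacher is a toy linear-softmax model; a real method would
    add regularizers (e.g. feature-statistic matching) on a deep net."""
    rng = np.random.default_rng(0)
    x = rng.normal(size=teacher_w.shape[1])     # start from random noise
    onehot = np.eye(teacher_w.shape[0])[target]
    for _ in range(steps):
        logits = teacher_w @ x + teacher_b
        p = np.exp(logits - logits.max())
        p /= p.sum()
        # gradient of cross-entropy w.r.t. the input: W^T (p - onehot)
        x -= lr * (teacher_w.T @ (p - onehot))
    return x
```

Repeating this per class (with different noise seeds) yields the synthetic samples that, per the abstract, complement the condensed set rather than replace it.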
Problem

Research questions and friction points this paper is trying to address.

Student accuracy degrades when distilling from condensed datasets alone, due to their limited information content
Recovering the original data distribution that dataset condensation discards
Keeping KD viable in extreme regimes, down to one condensed sample per class and few-shot settings
Innovation

Methods, ideas, or system contributions that make the work stand out.

First use of model inversion to expand condensed datasets for knowledge distillation
Teacher-generated synthetic samples that complement, rather than replace, the condensed data
Joint optimization of synthetic-data distribution alignment and distillation, yielding gains of up to 11.4% over standard model-inversion KD
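Once the condensed set is expanded with inversion-generated samples, the student is trained by ordinary response-based distillation over the combined pool. The sketch below shows one temperature-softened KD step for a linear student against a linear teacher, with both sample sources treated identically in the batch; the function names, the linear models, and the hyperparameters are illustrative assumptions rather than the paper's training recipe.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax over the last axis."""
    z = z / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distill_step(student_w, teacher_w, x_batch, T=4.0, lr=0.05):
    """One gradient step of response-based KD for a linear student:
    minimize the KL divergence between softened teacher and student
    outputs. `x_batch` mixes condensed and inversion-generated samples."""
    p_t = softmax(x_batch @ teacher_w.T, T)     # soft teacher targets
    p_s = softmax(x_batch @ student_w.T, T)     # current student outputs
    g = (p_s - p_t) * T                          # grad w.r.t. student logits
    return student_w - lr * (g.T @ x_batch) / len(x_batch)
```

Because the synthetic samples are generated from the same teacher that produces the soft targets, they add training signal that is consistent with the knowledge being transferred.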