Not All Instances Are Equally Valuable: Towards Influence-Weighted Dataset Distillation

📅 2025-10-31
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing data distillation methods often overlook intrinsic instance quality disparities, treating redundant or harmful samples equivalently to high-quality ones—thereby limiting distillation performance. To address this, we propose Influence-Weighted Distillation (IWD), the first framework to incorporate influence functions into data distillation. IWD quantifies each real-sample’s gradient-level contribution to the target model’s performance and dynamically assigns sample weights accordingly, enabling quality-aware sample retention and end-to-end synthetic optimization. Designed modularly, IWD is plug-and-play compatible with mainstream distillation pipelines. Extensive experiments demonstrate that IWD substantially enhances distilled dataset quality: it achieves up to a 7.8% absolute accuracy gain across multiple benchmarks and effectively distinguishes beneficial from harmful instances.

Technology Category

Application Category

📝 Abstract
Dataset distillation condenses large datasets into synthetic subsets, achieving performance comparable to training on the full dataset while substantially reducing storage and computation costs. Most existing dataset distillation methods assume that all real instances contribute equally to the process. In practice, real-world datasets contain both informative and redundant or even harmful instances, and directly distilling the full dataset without considering data quality can degrade model performance. In this work, we present Influence-Weighted Distillation IWD, a principled framework that leverages influence functions to explicitly account for data quality in the distillation process. IWD assigns adaptive weights to each instance based on its estimated impact on the distillation objective, prioritizing beneficial data while downweighting less useful or harmful ones. Owing to its modular design, IWD can be seamlessly integrated into diverse dataset distillation frameworks. Our empirical results suggest that integrating IWD tends to improve the quality of distilled datasets and enhance model performance, with accuracy gains of up to 7.8%.
Problem

Research questions and friction points this paper is trying to address.

Dataset distillation methods ignore varying data instance quality
Influence functions quantify instance impact for weighted distillation
Modular framework improves distilled dataset quality and model accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Assigns adaptive weights using influence functions
Prioritizes beneficial data while downweighting harmful instances
Seamlessly integrates into diverse dataset distillation frameworks
🔎 Similar Papers
No similar papers found.
Q
Qiyan Deng
Beijing Institute of Technology
C
Changqian Zheng
Beijing Institute of Technology
L
Lianpeng Qiao
Beijing Institute of Technology
Y
Yuping Wang
Beijing Institute of Technology
Chengliang Chai
Chengliang Chai
Beijing Institute of Technology
Data cleaning and integration
Lei Cao
Lei Cao
Assistant Professor, University of Arizona/Research Scientist, MIT CSAIL
DatabasesMachine learning