Towards Mitigating Architecture Overfitting on Distilled Datasets

📅 2023-09-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Dataset distillation suffers from "architecture overfitting": distilled data optimized for a lightweight network often fails to generalize to larger-capacity target models. Method: We propose a synergistic framework integrating DropPath-based implicit subnet ensembling with multi-scale knowledge distillation. For the first time, we incorporate a smoothness design—via DropPath regularization—to enforce behavioral consistency across subnets during distilled-data construction, jointly optimizing subnet behavior alignment and cross-architecture generalization. Contribution/Results: Experiments across multi-task and multi-scale settings demonstrate that our method consistently outperforms mainstream baselines when evaluated on target networks larger than those used for distillation; in several cases it even surpasses models trained on real data. The approach exhibits strong generalizability, robustness, and scalability across diverse architectures and tasks.
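The summary above pairs DropPath with knowledge distillation so that each sub-network mimics a small, well-performing teacher. The paper's multi-scale variant is not specified here, but the core soft-target distillation loss (Hinton et al.) that such methods build on can be sketched as follows; the function names and the temperature value are illustrative, not from the paper:

```python
import numpy as np

def softmax(logits, temperature):
    """Temperature-softened softmax along the last axis."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """KL(teacher || student) on temperature-softened distributions.

    The T^2 factor keeps gradient magnitudes comparable across
    temperatures, following the standard formulation.
    """
    p = softmax(teacher_logits, temperature)            # teacher soft targets
    log_p = np.log(p + 1e-12)
    log_q = np.log(softmax(student_logits, temperature) + 1e-12)
    kl = np.sum(p * (log_p - log_q), axis=-1)           # per-sample KL
    return (temperature ** 2) * kl.mean()
```

In the paper's setting, the "teacher" is the small training network that the distilled data was synthesized for, and the "student" is (a sub-network of) the larger test network.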
📝 Abstract
Dataset distillation methods have demonstrated remarkable performance for neural networks trained with very limited training data. However, a significant challenge arises in the form of "architecture overfitting": a distilled training dataset synthesized by a specific network architecture (i.e., the training network) yields poor performance when used to train other network architectures (i.e., test networks), especially when the test networks have a larger capacity than the training network. This paper introduces a series of approaches to mitigate this issue. Among them, DropPath renders the large model an implicit ensemble of its sub-networks, and knowledge distillation ensures each sub-network behaves similarly to the small but well-performing teacher network. These methods, characterized by their smoothing effects, significantly mitigate architecture overfitting. We conduct extensive experiments to demonstrate the effectiveness and generality of our methods across various scenarios involving different tasks and different sizes of distilled data. Furthermore, our approaches achieve comparable or even superior performance when the test network is larger than the training network.
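The abstract's key mechanism, DropPath (stochastic depth), turns a deep residual network into an implicit ensemble of shallower sub-networks by randomly dropping whole residual branches per sample during training. A minimal sketch of the per-sample drop operation, assuming a NumPy-style tensor layout with the batch on axis 0 (details are generic, not the paper's exact implementation):

```python
import numpy as np

def drop_path(branch_out, drop_prob, rng, training=True):
    """Stochastic depth: zero a residual branch per sample.

    Each sample's branch output is dropped with probability `drop_prob`;
    survivors are rescaled by 1/keep_prob so the expected value is
    unchanged. At inference the branch passes through untouched.
    """
    if not training or drop_prob == 0.0:
        return branch_out
    keep_prob = 1.0 - drop_prob
    # One Bernoulli mask per sample, broadcast over all feature dims.
    mask_shape = (branch_out.shape[0],) + (1,) * (branch_out.ndim - 1)
    mask = rng.binomial(1, keep_prob, size=mask_shape).astype(branch_out.dtype)
    return branch_out * mask / keep_prob

# A residual block then computes: out = x + drop_path(branch(x), p, rng)
# so each training sample effectively sees a random shallower sub-network.
```

Because every forward pass samples a different subset of active branches, training the large test network with DropPath on the distilled data averages over many smaller sub-networks, which is the smoothing effect the abstract credits for mitigating architecture overfitting.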
Problem

Research questions and friction points this paper is trying to address.

Dataset Distillation
Overfitting
Model Architecture
Innovation

Methods, ideas, or system contributions that make the work stand out.

DropPath
Knowledge Distillation
Overfitting Reduction
Xuyang Zhong
City University of Hong Kong
Deep learning
Chen Liu
Department of Computer Science, City University of Hong Kong, Hong Kong SAR, China