🤖 AI Summary
Balancing training performance and generalization remains a challenge in deep neural network optimization. To address this, we propose a cyclic optimization framework that jointly updates model parameters and input data. Our core innovation is the Iterative Constructive Perturbation (ICP) mechanism: it generates input perturbations guided by the model's loss, and integrates self-distillation with intermediate-layer feature alignment to establish a bidirectional model–data adaptation paradigm. This approach unifies loss-driven input reconstruction with progressive knowledge transfer, mitigating overfitting and training stagnation. Extensive experiments across diverse training regimes, including standard supervised learning, label-noise robustness, and few-shot learning, demonstrate consistent improvements in both accuracy and generalization. The framework is robust and broadly applicable, and its gains are not tied to task-specific assumptions.
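To make the loss-guided perturbation concrete, here is a minimal PyTorch sketch of the ICP refinement loop. It assumes the input is refined by plain gradient descent on the model's loss; the step count `steps`, step size `alpha`, and the cross-entropy objective are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def icp_refine(model, x, y, steps=5, alpha=0.1):
    """Iteratively perturb the input to *reduce* the model's loss.

    Unlike an adversarial attack, the perturbation is constructive:
    each step moves the input along the negative loss gradient.
    `steps` and `alpha` are illustrative hyperparameters.
    """
    x_icp = x.clone().detach()
    for _ in range(steps):
        x_icp.requires_grad_(True)
        loss = F.cross_entropy(model(x_icp), y)
        # Gradient w.r.t. the input only; model weights are untouched.
        grad, = torch.autograd.grad(loss, x_icp)
        x_icp = (x_icp - alpha * grad).detach()
    return x_icp
```

Note the contrast with adversarial perturbation: the update descends the loss surface, so the refined input becomes easier for the current model to fit.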
📝 Abstract
Deep Neural Networks have achieved remarkable success across various domains; however, balancing performance and generalization remains a challenge when training these networks. In this paper, we rethink the traditional training paradigm and propose a novel framework that uses a cyclic optimization strategy to concurrently optimize both the model and its input data. Central to our approach is Iterative Constructive Perturbation (ICP), which leverages the model's loss to iteratively perturb the input, progressively constructing an enhanced representation over a series of refinement steps. This ICP input is then fed back into the model to produce improved intermediate features, which serve as the target in a self-distillation framework against the original features. By alternately adapting the model's parameters to the data and the data to the model, our method effectively narrows the gap between fitting and generalization, leading to enhanced performance. Extensive experiments show that our approach not only mitigates common performance bottlenecks in neural networks but also delivers significant improvements across training variations.
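The cyclic model–data update described above can be sketched as a single training step that combines the task loss with an intermediate-feature alignment (self-distillation) term. This is a hedged sketch, not the paper's exact procedure: it reuses the `icp_refine` helper from the earlier sketch, and the `feature_extractor` hook and weighting factor `beta` are hypothetical names introduced for illustration.

```python
import torch
import torch.nn.functional as F

def train_step(model, feature_extractor, x, y, optimizer,
               icp_steps=5, alpha=0.1, beta=1.0):
    """One cyclic update: first adapt the data, then fit the model.

    `feature_extractor(x)` is a hypothetical hook returning an
    intermediate-layer feature map of `model`; `beta` weights the
    feature-alignment term against the task loss.
    """
    # 1) Data-to-model phase: construct the ICP input.
    x_icp = icp_refine(model, x, y, steps=icp_steps, alpha=alpha)

    # 2) Model-to-data phase: distill the improved features back
    #    into the model via alignment with the original features.
    with torch.no_grad():
        target_feat = feature_extractor(x_icp)   # target (no gradient)
    student_feat = feature_extractor(x)          # original-input features

    task_loss = F.cross_entropy(model(x), y)
    align_loss = F.mse_loss(student_feat, target_feat)
    loss = task_loss + beta * align_loss

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the alignment target comes from the same network evaluated on the refined input, this is self-distillation: no external teacher is required, and the ICP features act as a moving target that pulls the original-input features toward a lower-loss representation.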