🤖 AI Summary
This work identifies the distributional drift and performance degradation induced by cyclic self-training on synthetic data in continual learning, specifically within the generative experience replay (GER) paradigm, where model reliability and latent-space alignment are systematically compromised. Methodologically, we provide the first statistical argument that synthetic data introduces substantial bias and variance, undermining the consistency of maximum likelihood estimation; we further uncover an implicit collapse phenomenon in mainstream generative models (GANs and VAEs) during iterative self-training. Through statistical modeling, quantitative measurement of latent-space alignment, and multi-round GER experiments, we empirically demonstrate that all evaluated methods suffer over 60% degradation in latent-space alignment and a 2.3× increase in reconstruction error after only 3–5 self-training cycles. These findings provide both a theoretical foundation for, and an empirical warning about, the safety and stability of GER in continual learning systems.
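The collapse phenomenon described above can be illustrated with a minimal toy simulation (not from the paper; the setup and numbers here are illustrative assumptions): repeatedly fit a Gaussian by maximum likelihood to a dataset, then replace the dataset with samples drawn from the fitted model. Because each MLE fit is computed from a finite sample, estimation error compounds across generations and the fitted variance tends to shrink, a simple analogue of the drift and degradation observed in cyclic self-training.

```python
import random
import statistics

def self_training_cycle(data, n=50):
    """Fit a 1-D Gaussian by MLE, then draw a fresh 'synthetic' dataset from it."""
    mu = statistics.fmean(data)      # MLE for the mean
    sigma = statistics.pstdev(data)  # MLE (population) std: biased for finite n
    return [random.gauss(mu, sigma) for _ in range(n)]

random.seed(0)
# Generation 0: real data from a standard normal distribution.
data = [random.gauss(0.0, 1.0) for _ in range(50)]

variances = []
for generation in range(100):
    variances.append(statistics.pstdev(data) ** 2)
    data = self_training_cycle(data)  # train only on the previous model's output

print(f"variance at generation 0:  {variances[0]:.3f}")
print(f"variance at generation 99: {variances[-1]:.3f}")
```

In expectation the fitted variance decays by a factor of (n-1)/n per generation, so with n = 50 the distribution visibly narrows after ~100 cycles. This is only a caricature of the GAN/VAE setting studied in the paper, but it makes the mechanism behind iterative self-training collapse concrete.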
📝 Abstract
The use of synthetically generated data for training models is becoming common practice. While generated data can augment a training set, repeated training on synthetic data raises concerns about distribution drift and performance degradation due to dataset contamination. We investigate the consequences of this bootstrapping process through the lens of continual learning, drawing a connection to Generative Experience Replay (GER) methods. We present a statistical analysis showing that synthetic data introduces significant bias and variance into training objectives, weakening the reliability of maximum likelihood estimation. We provide empirical evidence that popular generative models collapse under repeated training on synthetic data, quantify this degradation, and show that state-of-the-art GER methods fail to maintain alignment in the latent space. Our findings raise critical concerns about the use of synthetic data in continual learning.