On the Dangers of Bootstrapping Generation for Continual Learning and Beyond

📅 2025-12-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work identifies distributional drift and performance degradation induced by synthetic-data-based cyclic self-training in continual learning, specifically within the generative experience replay (GER) paradigm, where model reliability and latent-space alignment are systematically compromised. Methodologically, we provide the first statistical proof that synthetic data introduce substantial bias and variance, undermining the consistency of maximum likelihood estimation; we further uncover an implicit collapse phenomenon in mainstream generative models (GANs/VAEs) during iterative self-training. Through rigorous statistical modeling, quantitative measurement of latent-space alignment, and multi-round GER experiments, we empirically demonstrate that all evaluated methods suffer over 60% degradation in latent-space alignment and a 2.3× increase in reconstruction error after just 3–5 self-training cycles. These findings establish theoretical foundations and empirical warnings regarding the safety and stability of GER in continual learning systems.

Technology Category

Application Category

📝 Abstract
The use of synthetically generated data for training models is becoming a common practice. While generated data can augment the training data, repeated training on synthetic data raises concerns about distribution drift and degradation of performance due to contamination of the dataset. We investigate the consequences of this bootstrapping process through the lens of continual learning, drawing a connection to Generative Experience Replay (GER) methods. We present a statistical analysis showing that synthetic data introduces significant bias and variance into training objectives, weakening the reliability of maximum likelihood estimation. We provide empirical evidence showing that popular generative models collapse under repeated training with synthetic data. We quantify this degradation and show that state-of-the-art GER methods fail to maintain alignment in the latent space. Our findings raise critical concerns about the use of synthetic data in continual learning.
Problem

Research questions and friction points this paper is trying to address.

Bootstrapping synthetic data causes distribution drift and performance degradation
Synthetic data introduces bias and variance, weakening maximum likelihood estimation
Generative models collapse under repeated training with synthetic data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Synthetic data introduces bias and variance
Generative models collapse under repeated training
State-of-the-art GER methods fail in latent space alignment
🔎 Similar Papers
No similar papers found.
Daniil Zverev
Daniil Zverev
PhD student at University of Munich
continual learningmultimodal deep learning
A
A. S. Koepke
Technical University of Munich, MCML; University of Tübingen, Tübingen AI Center
J
Joao F. Henriques
University of Oxford