Guiding Evolutionary AutoEncoder Training with Activation-Based Pruning Operators

📅 2025-05-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the low efficiency of jointly pruning the encoder and decoder of an autoencoder. We propose an evolutionary dual-path pruning method guided by layer-wise activation statistics: two novel activation-guided mutation operators are designed and systematically evaluated under both standard training and a spatial coevolutionary framework. Key findings: under standard training, activation-driven pruning significantly outperforms random pruning, yielding models with fewer parameters, faster inference, and reconstruction quality comparable to full-parameter baselines. Within high-dimensional coevolutionary populations, however, random pruning proves more robust, revealing a critical influence of population dimensionality on the efficacy of pruning guidance and suggesting that high-dimensional coevolution inherently promotes pruning uniformity. To our knowledge, this is the first work to integrate dynamic activation modeling with coevolutionary pruning for autoencoders, establishing a new paradigm for efficient autoencoder compression.

📝 Abstract
This study explores a novel approach to neural network pruning using evolutionary computation, focusing on simultaneously pruning the encoder and decoder of an autoencoder. We introduce two new mutation operators that use layer activations to guide weight pruning. Our findings reveal that one of these activation-informed operators outperforms random pruning, producing more efficient autoencoders whose performance is comparable to canonically trained models. Prior work has established that autoencoder training is effective and scalable with a spatial coevolutionary algorithm that cooperatively coevolves a population of encoders with a population of decoders, rather than a single autoencoder. We evaluate how the same activation-guided mutation operators transfer to this context and find that, in the coevolutionary setting, random pruning outperforms guided pruning. This suggests that activation-based guidance is more effective in low-dimensional pruning environments, where constrained sample spaces can cause random pruning to deviate from true uniformity. Conversely, population-driven strategies enhance robustness by expanding the total pruning dimensionality, achieving statistically uniform randomness that better preserves system dynamics. We experiment with pruning according to different schedules and present the best combinations of operator and schedule for both the canonical and coevolving-populations cases.
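As a rough illustration of the activation-guided idea described above (this is not the paper's exact operator: the function names, the unit-level pruning granularity, and the mean-absolute-activation importance score are all assumptions for the sketch), a mutation could zero the outgoing weights of a layer's least-active units, with uniform random pruning as the baseline:

```python
import numpy as np

rng = np.random.default_rng(0)

def activation_guided_prune(weights, activations, prune_frac=0.2):
    # Hypothetical operator: zero the outgoing weights of the input units
    # with the lowest mean absolute activation over a batch.
    importance = np.abs(activations).mean(axis=0)
    n_prune = int(prune_frac * weights.shape[0])
    prune_idx = np.argsort(importance)[:n_prune]   # least-active units
    pruned = weights.copy()
    pruned[prune_idx, :] = 0.0
    return pruned

def random_prune(weights, prune_frac=0.2):
    # Baseline: zero the same number of unit rows chosen uniformly at random.
    n_prune = int(prune_frac * weights.shape[0])
    prune_idx = rng.choice(weights.shape[0], size=n_prune, replace=False)
    pruned = weights.copy()
    pruned[prune_idx, :] = 0.0
    return pruned

# Toy layer: 10 input units, 4 output units, batch of 32 activations.
W = rng.normal(size=(10, 4))
A = rng.normal(size=(32, 10))
W_guided = activation_guided_prune(W, A)
print(np.count_nonzero(np.all(W_guided == 0, axis=1)))  # → 2 units pruned
```

Either function can serve as a mutation operator in an evolutionary loop; the paper's finding is that which one wins depends on whether the search runs over a single autoencoder or over coevolving populations.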
Problem

Research questions and friction points this paper is trying to address.

Evolutionary pruning for autoencoder encoder-decoder optimization
Activation-guided mutation operators vs random pruning efficiency
Population-driven strategies enhance pruning robustness in coevolution
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evolutionary computation guides autoencoder pruning
Layer activations inform new mutation operators
Population-driven strategies enhance pruning robustness
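The cooperative coevolution behind the last bullet can be sketched loosely as follows; the spatial grid neighborhoods, variation operators, and fitness details of the actual algorithm are omitted, and all names and dimensions here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def loss(enc, dec, x):
    # Mean squared reconstruction error for a linear encoder/decoder pair.
    return float(np.mean((x @ enc @ dec - x) ** 2))

n, d, h = 4, 8, 3                        # population size, input dim, code dim
x = rng.normal(size=(16, d))             # toy data batch
encoders = [rng.normal(scale=0.1, size=(d, h)) for _ in range(n)]
decoders = [rng.normal(scale=0.1, size=(h, d)) for _ in range(n)]

# Cooperative scoring: each encoder keeps its best loss over partner decoders.
enc_fitness = [min(loss(e, dec, x) for dec in decoders) for e in encoders]
best = int(np.argmin(enc_fitness))
print(f"best encoder: {best}, loss: {enc_fitness[best]:.4f}")
```

Because every individual carries its own pruning mask, the effective pruning search space grows with population size, which is the "expanded pruning dimensionality" the abstract credits for random pruning's robustness in this setting.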
Steven Jorgensen
MIT, USA
Erik Hemberg
Research Scientist, MIT CSAIL
Artificial Intelligence, Machine Learning, Evolutionary Computation
J. Toutouh
ITIS UMA, University of Malaga, Spain
Una-May O’Reilly
MIT, USA