🤖 AI Summary
To address the high computational cost of full-point forward propagation in implicit neural representation (INR) training, this paper proposes EVOS, an evolutionary selector that dynamically chooses high-contribution sampling points, replacing global forward passes. Its core contribution is being the first to introduce evolutionary algorithms into INR training acceleration, via three novel mechanisms: sparse fitness evaluation, frequency-guided crossover, and augmented unbiased mutation—together enabling efficient, unbiased, and low-overhead optimization of the sampled point set. EVOS requires no additional parameters or hardware support. Empirically, it reduces training time by 48%–66% while maintaining or even improving convergence, significantly outperforming existing sampling-based acceleration methods and achieving state-of-the-art (SOTA) results.
📝 Abstract
We propose EVOlutionary Selector (EVOS), an efficient training paradigm for accelerating Implicit Neural Representation (INR). Unlike conventional INR training that feeds all samples through the neural network in each iteration, our approach restricts training to strategically selected points, reducing computational overhead by eliminating redundant forward passes. Specifically, we treat each sample as an individual in an evolutionary process, where only the fittest individuals survive and merit inclusion in training, adaptively evolving with the neural network dynamics. While this is conceptually similar to Evolutionary Algorithms, the distinct objectives (selection for acceleration vs. iterative solution optimization) require a fundamental redefinition of evolutionary mechanisms for our context. In response, we design sparse fitness evaluation, frequency-guided crossover, and augmented unbiased mutation, which together comprise EVOS. These components respectively guide sample selection at reduced computational cost, enhance performance through frequency-domain balance, and mitigate selection bias from cached evaluation. Extensive experiments demonstrate that our method achieves approximately 48%–66% reduction in training time while ensuring superior convergence without additional cost, establishing state-of-the-art acceleration among recent sampling-based strategies.
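The selection idea described above—keep the fittest samples (e.g., those with the highest cached reconstruction error) while injecting a random "mutation" fraction to offset the bias of stale cached fitness—can be sketched as follows. This is a minimal illustrative sketch under our own assumptions (error-as-fitness, a fixed mutation rate), not the authors' actual implementation; the function name and parameters are hypothetical.

```python
import numpy as np

def evolve_sample_indices(errors, n_select, mutation_rate=0.2, rng=None):
    """Pick indices of samples to train on this iteration.

    errors: cached per-sample fitness (here: last-known reconstruction error);
            a hypothetical stand-in for EVOS's sparse fitness evaluation.
    n_select: number of samples to forward through the network.
    mutation_rate: fraction of the selection drawn uniformly at random,
                   mitigating bias from stale cached errors.
    """
    rng = rng or np.random.default_rng(0)
    n_total = errors.shape[0]
    n_mut = int(n_select * mutation_rate)      # randomly injected samples
    n_fit = n_select - n_mut                   # survivors chosen by fitness
    fittest = np.argsort(errors)[-n_fit:]      # highest-error samples survive
    rest = np.setdiff1d(np.arange(n_total), fittest)
    mutated = rng.choice(rest, size=n_mut, replace=False)
    return np.concatenate([fittest, mutated])
```

In a training loop, one would forward only the selected subset, update the network, refresh the cached errors for those indices, and re-select each iteration; the paper's frequency-guided crossover (balancing selections in the frequency domain) is omitted here for brevity.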