EVOS: Efficient Implicit Neural Training via EVOlutionary Selector

📅 2024-12-13
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high computational cost of full-point forward propagation in implicit neural representation (INR) training, this paper proposes EVOS, an evolutionary sampler that dynamically selects high-contribution sampling points to replace global forward passes. Its core contribution lies in being the first to introduce evolutionary algorithms into INR training acceleration, incorporating three novel mechanisms: sparse fitness evaluation, frequency-domain-guided crossover, and enhanced unbiased mutation—enabling efficient, unbiased, and low-overhead sampling-point optimization. EVOS requires no additional parameters or hardware support. Empirically, it reduces training time by 48%–66% while maintaining or even improving convergence performance, significantly outperforming existing sampling-based acceleration methods and achieving state-of-the-art (SOTA) results.

📝 Abstract
We propose EVOlutionary Selector (EVOS), an efficient training paradigm for accelerating Implicit Neural Representation (INR). Unlike conventional INR training that feeds all samples through the neural network in each iteration, our approach restricts training to strategically selected points, reducing computational overhead by eliminating redundant forward passes. Specifically, we treat each sample as an individual in an evolutionary process, where only those fittest ones survive and merit inclusion in training, adaptively evolving with the neural network dynamics. While this is conceptually similar to Evolutionary Algorithms, their distinct objectives (selection for acceleration vs. iterative solution optimization) require a fundamental redefinition of evolutionary mechanisms for our context. In response, we design sparse fitness evaluation, frequency-guided crossover, and augmented unbiased mutation to comprise EVOS. These components respectively guide sample selection with reduced computational cost, enhance performance through frequency-domain balance, and mitigate selection bias from cached evaluation. Extensive experiments demonstrate that our method achieves approximately 48%-66% reduction in training time while ensuring superior convergence without additional cost, establishing state-of-the-art acceleration among recent sampling-based strategies.
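The selection idea described in the abstract — treat each sample as an individual, let only the fittest survive into training, and inject some randomness to offset selection bias — can be sketched roughly as follows. This is an illustrative sketch only: `select_samples`, `keep_ratio`, and `mutation_ratio` are assumed names and values, and per-sample reconstruction error is just one plausible fitness measure; none of this is the authors' actual interface.

```python
import numpy as np

rng = np.random.default_rng(0)

def select_samples(per_sample_error, keep_ratio=0.25, mutation_ratio=0.05):
    """Pick the highest-error ('fittest') samples for the next training step,
    then swap in a small fraction of uniformly random samples, loosely
    mirroring EVOS's unbiased-mutation step that counters selection bias
    from stale (cached) fitness values."""
    n = per_sample_error.shape[0]
    k = max(1, int(n * keep_ratio))
    # Survival of the fittest: indices of the k largest errors.
    fittest = np.argsort(per_sample_error)[-k:]
    # Mutation: a few uniformly random indices, regardless of fitness.
    m = max(1, int(k * mutation_ratio))
    mutated = rng.choice(n, size=m, replace=False)
    return np.unique(np.concatenate([fittest, mutated]))

# Toy usage: fitness scores for 1000 coordinate samples of an INR.
errors = rng.random(1000)            # pretend per-sample reconstruction error
idx = select_samples(errors)         # indices to forward/backprop this iteration
```

In a real training loop, only the coordinates at `idx` would be fed through the network each iteration, and the error (fitness) cache would be refreshed sparsely rather than every step — which is where the paper's sparse fitness evaluation and frequency-guided crossover come in.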
Problem

Research questions and friction points this paper is trying to address.

Conventional INR training feeds every sample through the network in each iteration, incurring high computational cost
Redundant forward passes over full sample sets dominate training time
Existing sampling-based acceleration strategies struggle to cut training time without degrading convergence
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evolutionary sample selection for training: first application of evolutionary algorithms to INR training acceleration
Sparse fitness evaluation reduces per-iteration selection cost
Frequency-guided crossover and augmented unbiased mutation balance the frequency domain and mitigate cached-evaluation bias
Weixiang Zhang
Tsinghua University
Neural Representation, 3D Computer Vision
Shuzhao Xie
Tsinghua University
Graphics, Multimedia
Chengwei Ren
Tsinghua University
Siyi Xie
Xi’an Jiaotong University
Chen Tang
Chinese University of Hong Kong
Shijia Ge
Tsinghua University
Machine Learning, AI, 3DV, Robotics, AI4Med
Mingzi Wang
Tsinghua University
Zhi Wang
Tsinghua University