Sensitivity-Based Layer Insertion for Residual and Feedforward Neural Networks

📅 2023-11-27
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Neural network training typically requires a fixed, predefined architecture and tedious manual hyperparameter tuning. Method: this paper proposes inserting new layers dynamically during training, guided by first-order sensitivity analysis of the objective function with respect to the virtual parameters that a candidate layer, if inserted, would contribute. The approach borrows techniques from constrained optimization to decide when insertion is worthwhile. Contribution/Results: the method supports fully connected feedforward networks and residual networks, and is compatible with common activation functions (e.g., ReLU, LeakyReLU). In numerical experiments, sensitivity-based layer insertion improves training loss decay compared with not inserting the layer, and reduces computational effort compared with training the larger network from the beginning. Code is publicly available.
📝 Abstract
The training of neural networks requires tedious and often manual tuning of the network architecture. We propose a systematic method to insert new layers during the training process, which eliminates the need to choose a fixed network size before training. Our technique borrows techniques from constrained optimization and is based on first-order sensitivity information of the objective with respect to the virtual parameters that additional layers, if inserted, would offer. We consider fully connected feedforward networks with selected activation functions as well as residual neural networks. In numerical experiments, the proposed sensitivity-based layer insertion technique exhibits improved training decay, compared to not inserting the layer. Furthermore, the computational effort is reduced in comparison to inserting the layer from the beginning. The code is available at https://github.com/LeonieKreis/layer_insertion_sensitivity_based.
Problem

Research questions and friction points this paper is trying to address.

Automating neural network architecture tuning during training
Eliminating fixed network size choice before training starts
Improving performance with sensitivity-based layer insertion
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic layer insertion during training process
Uses first-order sensitivity of the objective with respect to virtual layer parameters
Applicable to various neural network architectures
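The core idea above, scoring a candidate layer by the gradient of the loss with respect to its "virtual" parameters, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `VirtualResidual` module, the insertion point, and the gradient-norm score are assumptions chosen so that the candidate layer, initialized at zero, leaves the current network's function unchanged while still exposing first-order sensitivity information.

```python
# Hedged sketch of sensitivity-based layer scoring (PyTorch).
# Not the paper's code: module names and the insertion point are
# illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

# An existing small feedforward network being trained.
net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

class VirtualResidual(nn.Module):
    """Candidate residual layer x + W2 relu(W1 x), with W2 = 0 so that
    inserting it does not change the network's current function."""
    def __init__(self, dim, hidden):
        super().__init__()
        self.inner = nn.Linear(dim, hidden)
        self.outer = nn.Linear(hidden, dim)
        nn.init.zeros_(self.outer.weight)  # virtual parameters start at zero
        nn.init.zeros_(self.outer.bias)

    def forward(self, x):
        return x + self.outer(torch.relu(self.inner(x)))

candidate = VirtualResidual(8, 8)

x = torch.randn(32, 4)
y = torch.randint(0, 2, (32,))

# Forward pass as if the candidate were inserted after the first ReLU.
h = torch.relu(net[0](x))
loss = nn.functional.cross_entropy(net[2](candidate(h)), y)

# First-order sensitivity: gradient of the loss with respect to the
# virtual (zero-initialized) parameters; the rest of the net is untouched.
grads = torch.autograd.grad(loss, list(candidate.outer.parameters()))
sensitivity = sum(g.norm() ** 2 for g in grads).sqrt().item()
print(f"sensitivity score: {sensitivity:.4f}")
```

A large score suggests the candidate layer would reduce the loss quickly if inserted; comparing scores across candidate positions gives a demand-driven insertion rule in the spirit of the method.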