VeLU: Variance-enhanced Learning Unit for Deep Neural Networks

📅 2025-04-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing activation functions (e.g., ReLU, GELU) lack dynamic adaptability to input feature statistics, leading to unstable gradient flow and covariate shift. To address this, we propose VeLU (Variance-enhanced Learning Unit), the first activation function that performs real-time, variance-based adaptive scaling of inputs. VeLU employs a smooth ArcTan-Sin nonlinear transformation to model input-response behavior, incorporates Wasserstein-2 distance regularization to constrain the distribution of variance estimates, and integrates a differentiable variance estimation module for end-to-end optimization. Evaluated across six backbone architectures (including ViT-B16 and ResNet50) and six vision benchmarks, VeLU consistently outperforms ReLU, GELU, and Swish, achieving average Top-1 accuracy gains of 0.8%–1.3%. Moreover, it significantly improves training stability and generalization capability.
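The summary names three ingredients: a differentiable variance estimate, variance-based adaptive scaling, and a smooth ArcTan-Sin nonlinearity. The paper's exact formula is not reproduced here, so the following is only a minimal sketch of how those pieces could compose; the parameter `beta`, the `[0, 1]` gate construction, and the specific composition are assumptions, not the authors' definition.

```python
import numpy as np

def velu(x, beta=1.0, eps=1e-5):
    """Hypothetical sketch of a variance-scaled ArcTan-Sin activation.

    The input is rescaled by its (differentiable) variance estimate, then
    gated by a smooth arctan(sin(.)) response squashed into [0, 1],
    loosely analogous to how Swish gates x with sigmoid(x).
    """
    # Variance-based adaptive scaling (eps avoids division by zero).
    scale = 1.0 / np.sqrt(np.var(x) + eps)
    # arctan(sin(z)) lies in [-pi/4, pi/4]; map it affinely into [0, 1].
    gate = 0.5 * (1.0 + (4.0 / np.pi) * np.arctan(np.sin(beta * scale * x)))
    return x * gate

# Example: the gate is bounded, so |velu(x)| never exceeds |x|.
y = velu(np.array([-2.0, 0.0, 2.0]))
```

In a training framework, `np.var` would be replaced by the framework's autograd-tracked variance op so the scaling is learned end to end, as the summary's "differentiable variance estimation module" suggests.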

📝 Abstract
Activation functions are fundamental in deep neural networks and directly impact gradient flow, optimization stability, and generalization. Although ReLU remains standard because of its simplicity, it suffers from vanishing gradients and lacks adaptability. Alternatives such as Swish and GELU introduce smooth transitions but fail to adjust dynamically to input statistics. We propose VeLU, a Variance-enhanced Learning Unit: an activation function that scales dynamically with input variance by integrating ArcTan-Sin transformations and Wasserstein-2 regularization, effectively mitigating covariate shift and stabilizing optimization. Extensive experiments on ViT-B16, VGG19, ResNet50, DenseNet121, MobileNetV2, and EfficientNetB3 confirm VeLU's superiority over ReLU, ReLU6, Swish, and GELU on six vision benchmarks. The code for VeLU is publicly available on GitHub.
Problem

Research questions and friction points this paper is trying to address.

Fixed activations (ReLU) suffer from vanishing gradients and lack adaptability
Smooth alternatives (Swish, GELU) still fail to adjust dynamically to input statistics
Static input-response behavior leaves gradient flow unstable and contributes to covariate shift
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic scaling based on input variance
Integrates ArcTan-Sin transformations
Uses Wasserstein-2 regularization
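The Wasserstein-2 regularization is described as constraining the distribution of variance estimates. One computationally cheap reading uses the closed-form W2 distance between two univariate Gaussians, W2² = (μ₁ − μ₂)² + (σ₁ − σ₂)²: treat the running statistics as Gaussian and penalize their distance to a fixed target. The Gaussian assumption and the N(0, 1) target below are illustrative choices, not the paper's stated formulation.

```python
import numpy as np

def w2_gaussian_penalty(x, target_mean=0.0, target_std=1.0):
    """Closed-form squared Wasserstein-2 distance between the empirical
    statistics of x (treated as Gaussian) and a target Gaussian.

    Adding this term to the training loss would pull the regularized
    quantity's distribution toward N(target_mean, target_std**2).
    """
    mu, std = np.mean(x), np.std(x)
    return (mu - target_mean) ** 2 + (std - target_std) ** 2

# Example: a constant (zero-variance) input pays a penalty of exactly 1.0
# against the N(0, 1) target, since (0 - 0)^2 + (0 - 1)^2 = 1.
penalty = w2_gaussian_penalty(np.zeros(10))
```

This closed form only requires first- and second-moment estimates, so it adds negligible cost per step compared with sample-based optimal-transport distances.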