Rethinking Nonlinearity: Trainable Gaussian Mixture Modules for Modern Neural Architectures

📅 2025-10-08
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Traditional neural networks suffer from rigid nonlinear expressivity due to fixed, hand-crafted activation functions (e.g., ReLU, Softmax). To address this, we propose the Gaussian Mixture Nonlinear Module (GMNM), which replaces static activations with a learnable, differentiable projection onto Gaussian kernels—thereby recasting nonlinearity modeling as a density approximation problem in a metric space. Inspired by Gaussian Mixture Models (GMMs), GMNM incorporates relaxed probabilistic constraints and a parameterized projection mechanism, enabling end-to-end training while maintaining architectural agnosticism across MLPs, CNNs, and Transformers. Extensive experiments on benchmark tasks—including image classification and language modeling—demonstrate consistent improvements in both accuracy and convergence speed. These results validate GMNM’s generality, effectiveness, and computational efficiency, offering a flexible, gradient-based alternative to conventional activation design.
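As a concrete illustration of the mechanism described above, here is a minimal PyTorch sketch of a GMNM-style module, assuming an elementwise sum-of-Gaussians parameterization with the usual mixture constraints relaxed. The class name `GaussianMixtureActivation`, the kernel count, and the initialization are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn


class GaussianMixtureActivation(nn.Module):
    """Hypothetical GMNM-style activation: a learnable sum of Gaussian
    kernels applied elementwise, with the usual GMM constraints relaxed
    (weights need not be positive or sum to one)."""

    def __init__(self, num_kernels: int = 8):
        super().__init__()
        # Learnable mixture weights, kernel centers, and log-bandwidths.
        self.weights = nn.Parameter(torch.randn(num_kernels) * 0.1)
        self.centers = nn.Parameter(torch.linspace(-2.0, 2.0, num_kernels))
        self.log_scales = nn.Parameter(torch.zeros(num_kernels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Broadcast the input against the K kernels: (..., 1) vs (K,).
        diff = x.unsqueeze(-1) - self.centers           # (..., K)
        scales = self.log_scales.exp()                  # positive bandwidths
        kernels = torch.exp(-0.5 * (diff / scales) ** 2)
        # Relaxed mixture: a plain weighted sum, fully differentiable.
        return (kernels * self.weights).sum(dim=-1)     # back to x's shape
```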

📝 Abstract
Neural networks in general, from MLPs and CNNs to attention-based Transformers, are constructed from layers of linear combinations followed by nonlinear operations such as ReLU, Sigmoid, or Softmax. Despite their strength, these conventional designs are limited in the nonlinearity they can express by the choice of activation functions. In this work, we introduce Gaussian Mixture-Inspired Nonlinear Modules (GMNM), a new class of differentiable modules that draws on the universal density approximation property of Gaussian mixture models (GMMs) and the distance properties (metric space) of the Gaussian kernel. By relaxing probabilistic constraints and adopting a flexible parameterization of Gaussian projections, GMNM can be seamlessly integrated into diverse neural architectures and trained end-to-end with gradient-based methods. Our experiments demonstrate that incorporating GMNM into architectures such as MLPs, CNNs, attention mechanisms, and LSTMs consistently improves performance over standard baselines. These results highlight GMNM's potential as a powerful and flexible module for enhancing efficiency and accuracy across a wide range of machine learning applications.
Problem

Research questions and friction points this paper is trying to address.

Enhancing nonlinearity in neural networks with trainable Gaussian mixture modules
Replacing conventional activation functions through differentiable Gaussian projections
Improving performance across diverse architectures like MLPs, CNNs, and Transformers
Innovation

Methods, ideas, or system contributions that make the work stand out.

GMNM modules replace fixed activations with learnable Gaussian mixture kernels
Relaxed probabilistic constraints enable flexible, fully differentiable parameterization
Seamless integration into diverse neural architectures (see the usage sketch below)
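To make the integration claim concrete, here is a hypothetical usage sketch that drops the module into a plain MLP in place of a fixed activation; the layer sizes are arbitrary and chosen only for illustration.

```python
# Continues from the GaussianMixtureActivation sketch above.
import torch
import torch.nn as nn

mlp = nn.Sequential(
    nn.Linear(784, 256),
    GaussianMixtureActivation(num_kernels=8),  # instead of nn.ReLU()
    nn.Linear(256, 10),
)
logits = mlp(torch.randn(32, 784))  # shape: (32, 10)
```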
Weiguo Lu
Guangdong Institute of Intelligence Science and Technology
Gangnan Yuan
Great Bay University
Hong-kun Zhang
Great Bay University, University of Massachusetts at Amherst
Shangyang Li
Peking University
Computational Neuroscience · Machine learning