Advancing Constrained Monotonic Neural Networks: Achieving Universal Approximation Beyond Bounded Activations

📅 2025-05-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional monotonic neural networks rely on bounded activation functions and non-negative weight constraints, suffering from optimization difficulties and limited universal approximation capability. This work breaks this paradigm by proving, for the first time, that convex monotonic activations paired with non-positive weight constraints retain universal approximation power; it further establishes a theoretical equivalence between one-sided saturation in activations and sign constraints on weights. The authors propose a weight-sign-adaptive activation mechanism that eliminates the need for weight reparameterization. Theoretically, the framework expands the approximation capacity of monotonic networks. Empirically, the proposed architecture demonstrates improved optimization stability and greater robustness to initialization, and significantly outperforms conventional monotonic MLPs across multiple benchmark tasks.
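For context, the conventional construction the paper moves beyond can be sketched in a minimal, hypothetical single-input example (not the paper's code): non-negativity is enforced by exponentiating unconstrained parameters, and the hidden activation is bounded.

```python
import math

def conventional_monotone_mlp(x, raw_w1, b1, raw_w2, b2):
    """Toy one-hidden-layer monotonic MLP on a scalar input.

    Every weight is made non-negative by exponentiating an unconstrained
    parameter (the reparameterization the paper seeks to avoid), and the
    hidden activation (tanh) is bounded. Non-negative weights composed
    with increasing activations yield a non-decreasing output in x.
    """
    hidden = [math.tanh(math.exp(w) * x + b) for w, b in zip(raw_w1, b1)]
    return sum(math.exp(v) * h for v, h in zip(raw_w2, hidden)) + b2

# Whatever the unconstrained parameters are, the output is monotone in x.
raw_w1, b1, raw_w2, b2 = [0.3, -1.2], [0.1, -0.4], [-0.5, 0.7], 0.2
xs = [-2.0, -1.0, 0.0, 1.0, 2.0]
ys = [conventional_monotone_mlp(x, raw_w1, b1, raw_w2, b2) for x in xs]
assert all(a <= b for a, b in zip(ys, ys[1:]))
```

The `exp` reparameterization guarantees positivity but distorts the loss landscape and is sensitive to initialization, which is the optimization difficulty the summary refers to.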

📝 Abstract
Conventional techniques for imposing monotonicity in MLPs by construction involve the use of non-negative weight constraints and bounded activation functions, which pose well-known optimization challenges. In this work, we generalize previous theoretical results, showing that MLPs with non-negative weight constraints and activations that saturate on alternating sides are universal approximators for monotonic functions. Additionally, we show an equivalence between the saturation side of the activations and the sign of the weight constraint. This connection allows us to prove that MLPs with convex monotone activations and non-positive constrained weights also qualify as universal approximators, in contrast to their non-negative constrained counterparts. Our results provide theoretical grounding for the empirical effectiveness observed in previous works while suggesting possible architectural simplifications. Moreover, to further alleviate the optimization difficulties, we propose an alternative formulation that allows the network to adjust its activations according to the sign of the weights. This eliminates the requirement for weight reparameterization, easing initialization and improving training stability. Experimental evaluation reinforces the validity of the theoretical results, showing that our novel approach compares favourably to traditional monotonic architectures.
Problem

Research questions and friction points this paper is trying to address.

Generalizing universal approximation for monotonic MLPs with non-negative weights and alternating saturation activations
Establishing equivalence between activation saturation side and weight constraint sign
Proposing weight-sign-adaptive activations to ease optimization without reparameterization
Innovation

Methods, ideas, or system contributions that make the work stand out.

MLPs with non-negative weights and alternating saturating activations
Convex monotone activations with non-positive weights
Adjustable activations based on weight signs
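The saturation-side / weight-sign equivalence has a simple algebraic core: for any activation ρ, the unit ρ(w·x) with a non-negative weight w coincides with ρ̃(v·x) for the non-positive weight v = -w, where ρ̃(t) = ρ(-t) saturates on the opposite side. A small illustrative sketch using ReLU (a convex monotone activation) and its reflection — an assumption-laden toy demonstration, not the paper's implementation:

```python
def relu(t):
    # Convex monotone activation; saturates (at 0) for t <= 0.
    return max(t, 0.0)

def reflected_relu(t):
    # Reflection rho~(t) = rho(-t); saturates (at 0) for t >= 0,
    # i.e. on the opposite side.
    return relu(-t)

# Flipping the weight's sign can be absorbed into reflecting the
# activation's argument: relu(w * x) == reflected_relu((-w) * x).
w = 1.5
for x in [-2.0, -0.5, 0.0, 0.7, 3.0]:
    assert relu(w * x) == reflected_relu(-w * x)
```

This substitution is what lets a non-positive weight constraint trade places with a change of saturation side, and it motivates letting the activation adapt to the weight's sign instead of reparameterizing the weight.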