🤖 AI Summary
This study characterizes how the geometry of decision boundaries in deep random neural networks evolves with depth. It establishes rigorous correspondences between boundary volume growth, governed by the regularity of the activation function, and the boundary's fractal and spectral properties: irregular activations (e.g., the Heaviside step function) yield fractal boundaries whose Hausdorff dimension increases monotonically with depth, whereas for more regular activations (e.g., ReLU, logistic, tanh) the expected boundary volume is fully determined by a single, easily computable spectral parameter and either converges to zero, remains constant, or diverges exponentially. Methodologically, the work combines random field theory, fractal geometry (Hausdorff dimension analysis), boundary volume estimation for high-dimensional excursion sets, and Monte Carlo simulation; the theoretical predictions are confirmed in numerical experiments. The result is a quantitative mapping from the differential regularity of the activation function to the emergent geometry of deep network decision boundaries.
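The paper's exact definition of the spectral parameter is not given in this summary. In mean-field analyses of random networks, the standard depth-scale quantity is $\chi = \sigma_w^2\,\mathbb{E}[\varphi'(Z)^2]$ with $Z \sim \mathcal{N}(0,1)$; the sketch below estimates that quantity by Monte Carlo for ReLU and tanh, under the assumption (not confirmed by the abstract) that the paper's parameter is of this form.

```python
import numpy as np

# Hedged sketch: the paper's spectral parameter is not defined here.
# A common candidate from mean-field theory (an assumption) is
# chi = sigma_w^2 * E[phi'(Z)^2], Z ~ N(0, 1), estimated by Monte Carlo.
rng = np.random.default_rng(0)
z = rng.standard_normal(1_000_000)

def chi(phi_prime, sigma_w2):
    # Sample-mean estimate of sigma_w^2 * E[phi'(Z)^2].
    return sigma_w2 * np.mean(phi_prime(z) ** 2)

# ReLU: phi'(z) = 1{z > 0}. Under He scaling (sigma_w^2 = 2), chi = 1,
# the critical value in mean-field theory.
print("ReLU, sigma_w^2=2:", chi(lambda t: (t > 0).astype(float), 2.0))

# tanh: phi'(z) = 1 - tanh(z)^2. With sigma_w^2 = 1 this gives chi < 1.
print("tanh, sigma_w^2=1:", chi(lambda t: 1.0 - np.tanh(t) ** 2, 1.0))
```

A value of $\chi$ below, at, or above 1 would then correspond to the three regimes named in the summary (shrinking, constant, or exponentially growing expected boundary volume).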
📝 Abstract
We study the geometric properties of random neural networks by investigating the boundary volumes of their excursion sets for different activation functions, as the depth increases. More specifically, we show that, for activations which are not very regular (e.g., the Heaviside step function), the boundary volumes exhibit fractal behavior, with their Hausdorff dimension monotonically increasing with the depth. On the other hand, for activations which are more regular (e.g., ReLU, logistic and $\tanh$), as the depth increases, the expected boundary volumes can either converge to zero, remain constant or diverge exponentially, depending on a single spectral parameter which can be easily computed. Our theoretical results are confirmed in some numerical experiments based on Monte Carlo simulations.
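The abstract does not spell out the experimental setup, but a minimal Monte Carlo sketch of the kind it describes is shown below: count sign changes of a random network's output along a 1-D slice of input space as a proxy for boundary volume, at increasing depth. All widths, depths, scalings, and the sign-change proxy are illustrative assumptions, not the paper's protocol.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_net_output(xs, width=256, depth=8, act=np.tanh, sigma_w2=1.0):
    """Propagate inputs through a random fully connected network with
    N(0, sigma_w2/fan_in) weights; return scalar preactivation outputs."""
    h = xs
    for _ in range(depth):
        W = rng.normal(0.0, np.sqrt(sigma_w2 / h.shape[1]), (h.shape[1], width))
        h = act(h @ W)
    v = rng.normal(0.0, np.sqrt(1.0 / h.shape[1]), (h.shape[1], 1))
    return (h @ v).ravel()

# Sign changes along a line segment: a 1-D proxy for the volume of the
# zero-level boundary of the excursion set (single realization; averaging
# over many weight draws would estimate the expected boundary volume).
t = np.linspace(-3.0, 3.0, 4000)
xs = np.stack([t, np.ones_like(t)], axis=1)
for depth in (2, 8, 32):
    f = random_net_output(xs, depth=depth)
    crossings = int(np.sum(np.diff(np.sign(f)) != 0))
    print(f"depth={depth:3d}  boundary crossings ~= {crossings}")
```

Repeating the loop over many weight samples and plotting the average crossing count against depth would reproduce, in miniature, the convergent/constant/divergent trends the abstract attributes to the spectral parameter.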