🤖 AI Summary
To address the high computational cost, excessive parameter count, and poor interpretability of deep neural networks in edge and real-time scenarios, this paper proposes Spatially Embedded Neural Networks (SEN): each neuron is assigned an explicit, learnable coordinate in Euclidean space, replacing conventional weight matrices; synaptic weights are generated dynamically from the pairwise Euclidean distances between neurons. This is the first approach to model neuron positions as differentiable, optimizable parameters while incorporating biologically plausible wiring priors, and it applies to both multilayer perceptrons (MLPs) and spiking neural networks (SNNs). Experiments on MNIST show that SEN matches the accuracy of fully connected baselines while reducing parameters by over 80%, and it further enables structural visualization and topological interpretability. Key contributions: (1) a parameter-efficient architecture driven by spatial coordinates; (2) a distance-coupled dynamic wiring mechanism; and (3) a unified embedding framework compatible with both artificial and spiking neural networks.
📝 Abstract
The high computational complexity and growing parameter counts of deep neural networks pose significant challenges for deployment in resource-constrained environments, such as edge devices or real-time systems. To address this, we propose a parameter-efficient neural architecture in which neurons are embedded in Euclidean space. During training, their positions are optimized, and each synaptic weight is determined as the inverse of the spatial distance between the connected neurons. These distance-dependent wiring rules replace traditional learnable weight matrices and significantly reduce the number of parameters while introducing a biologically inspired inductive bias: connection strength decreases with spatial distance, reflecting the brain's embedding in three-dimensional space, where connections tend to minimize wiring length. We validate this approach for both multi-layer perceptrons and spiking neural networks. Through a series of experiments, we demonstrate that these spatially embedded neural networks achieve performance competitive with conventional architectures on the MNIST dataset. Additionally, the models maintain performance even at pruning rates exceeding 80% sparsity, outperforming traditional networks with the same number of parameters under comparable conditions. Finally, the spatial embedding framework offers an intuitive visualization of the network structure.
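To make the core idea concrete, here is a minimal NumPy sketch of one spatially embedded layer. It is an illustration based on the abstract's description, not the authors' implementation: the class name `SpatialLayer`, the coordinate dimensionality, the initialization, and the small `eps` stabilizer are all assumptions. The only learnable parameters are neuron coordinates; the weight matrix is derived on the fly as the inverse of pairwise Euclidean distances.

```python
import numpy as np

class SpatialLayer:
    """Illustrative sketch (hypothetical, not the paper's exact code):
    each input and output neuron gets a learnable coordinate, and the
    synaptic weight is the inverse Euclidean distance between them."""

    def __init__(self, n_in, n_out, dim=3, seed=0):
        rng = np.random.default_rng(seed)
        # Learnable parameters are coordinates, not a dense weight matrix.
        self.pos_in = rng.standard_normal((n_in, dim))
        self.pos_out = rng.standard_normal((n_out, dim))

    def weights(self, eps=1e-6):
        # Pairwise Euclidean distances: shape (n_in, n_out).
        diff = self.pos_in[:, None, :] - self.pos_out[None, :, :]
        dist = np.linalg.norm(diff, axis=-1)
        # Distance-dependent wiring: strength decays as 1/distance.
        # (eps is an assumed stabilizer against coincident neurons.)
        return 1.0 / (dist + eps)

    def forward(self, x):
        # Standard linear map, but with distance-derived weights.
        return x @ self.weights()

# Parameter comparison for a 784 -> 128 layer (MNIST-sized input):
layer = SpatialLayer(n_in=784, n_out=128, dim=3)
coord_params = layer.pos_in.size + layer.pos_out.size  # (784 + 128) * 3
dense_params = 784 * 128                               # conventional layer
```

In this configuration the coordinate parameterization stores (784 + 128) × 3 = 2,736 values versus 100,352 for a dense weight matrix, which is the kind of reduction the abstract refers to. In an actual training setup the coordinates would be updated by gradient descent, since the distance computation is differentiable with respect to the positions.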