🤖 AI Summary
To address the inefficiency of MLPs (excessive parameters) and CNNs (limited receptive fields) in modeling high-dimensional data, this paper proposes the Multi-Scale Tensor Summation (MTS) Layer, a novel learnable neural network layer. The MTS Layer combines multi-scale tensor summation, Tucker-like mode products, and a multi-head gated nonlinear unit (MHG), embedding tensor decomposition principles directly into the backbone architecture rather than using them merely for post-hoc model compression. This design enables parameter sharing, expanded receptive fields, and efficient nonlinear modeling. Built upon the MTS Layer, MTSNet performs strongly on image classification, signal restoration, and compression tasks, outperforming both conventional MLPs and CNNs. Moreover, on vision benchmarks, MTSNet offers a more favorable performance-efficiency trade-off than state-of-the-art Transformer models while operating at lower computational complexity.
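The summary mentions a multi-head gated nonlinear unit (MHG) without giving its formulation. As a rough, hypothetical illustration of what "multi-head gating" can mean, the GLU-style sketch below splits the channel axis into heads and gates one half of each head with a sigmoid of the other half. The function name, head-splitting scheme, and the GLU-style design are assumptions for illustration, not the paper's actual MHG definition.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def multi_head_gate(x, n_heads):
    """Illustrative multi-head gated nonlinearity (GLU-style sketch,
    not the paper's exact MHG): split the channel (last) axis into
    heads, halve each head into a content part and a gate part, and
    multiply content by sigmoid(gate). Halves the channel count."""
    c = x.shape[-1]
    assert c % (2 * n_heads) == 0, "channels must split evenly into gated heads"
    heads = x.reshape(x.shape[:-1] + (n_heads, c // n_heads))
    content, gate = np.split(heads, 2, axis=-1)   # per-head halves
    out = content * sigmoid(gate)                  # element-wise gating
    return out.reshape(x.shape[:-1] + (c // 2,))

# Toy usage: 8 channels, 2 heads -> 4 output channels.
y = multi_head_gate(np.ones((3, 8)), n_heads=2)
```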
📝 Abstract
Multilayer perceptrons (MLPs), or fully connected artificial neural networks, are known for performing vector-matrix multiplications using learnable weight matrices; however, their practical application in many machine learning tasks, especially in computer vision, can be limited due to the high dimensionality of input-output pairs at each layer. To improve efficiency, convolutional operators have been utilized to facilitate weight sharing and local connections, yet they are constrained by limited receptive fields. In this paper, we introduce Multiscale Tensor Summation (MTS) Factorization, a novel neural network operator that implements tensor summation at multiple scales, where each tensor to be summed is obtained through Tucker-decomposition-like mode products. Unlike other tensor decomposition methods in the literature, MTS is not introduced as a network compression tool but as a new backbone neural layer. MTS not only reduces the number of parameters required while enhancing the efficiency of weight optimization compared to traditional dense layers (i.e., unfactorized weight matrices in MLP layers), but it also demonstrates clear advantages over convolutional layers. A proof-of-concept experimental comparison of the proposed MTS networks with MLPs and Convolutional Neural Networks (CNNs) shows their effectiveness across various tasks, such as classification, compression, and signal restoration. Additionally, when integrated with modern non-linear units such as the multi-head gate (MHG), also introduced in this study, the corresponding neural network, MTSNet, demonstrates a more favorable complexity-performance tradeoff compared to state-of-the-art transformers in various computer vision applications. The software implementation of the MTS layer and the corresponding MTS-based networks, MTSNets, is shared at https://github.com/mehmetyamac/MTSNet.
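The abstract's core operation, a sum of tensors each built via Tucker-decomposition-like mode products, can be sketched numerically. Below is a minimal, hypothetical numpy sketch (the function names, branch structure, and factor shapes are illustrative assumptions, not the paper's implementation): `mode_product` is the standard Tucker n-mode product of a tensor with a matrix, and `mts_layer` sums the outputs of several branches, each applying a chain of mode products with its own small factor matrices instead of one large dense weight matrix.

```python
import numpy as np

def mode_product(tensor, matrix, mode):
    """Tucker-style n-mode product: multiply `tensor` by `matrix`
    along axis `mode` (matrix shape: (new_dim, old_dim))."""
    t = np.moveaxis(tensor, mode, 0)
    shape = t.shape
    t = matrix @ t.reshape(shape[0], -1)          # contract the mode
    t = t.reshape((matrix.shape[0],) + shape[1:])
    return np.moveaxis(t, 0, mode)

def mts_layer(x, branches):
    """Illustrative MTS-style layer: each branch is a list of
    (matrix, mode) factor pairs applied as a chain of mode products;
    the branch outputs (which must share a shape) are summed."""
    out = None
    for factors in branches:
        y = x
        for matrix, mode in factors:
            y = mode_product(y, matrix, mode)
        out = y if out is None else out + y
    return out

# Toy usage: map an 8x8 input to 4x4 with two branches of small factors.
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))
branches = [
    [(rng.standard_normal((4, 8)), 0), (rng.standard_normal((4, 8)), 1)],
    [(rng.standard_normal((4, 8)), 0), (rng.standard_normal((4, 8)), 1)],
]
y = mts_layer(x, branches)   # shape (4, 4)
```

Note the parameter count: each branch here uses two 4x8 factors (64 values) versus the 64x16 = 1024 weights a dense layer would need for the same 64-to-16 mapping, which is the kind of factorized saving the abstract describes.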