HTMA-Net: Towards Multiplication-Avoiding Neural Networks via Hadamard Transform and In-Memory Computing

๐Ÿ“… 2025-09-27
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
To address the excessive multiplication overhead of deep neural networks on energy-constrained edge devices, this paper proposes an efficient architecture integrating the Hadamard transform with multiplication-free SRAM-based in-memory computing (IMC). The core contribution is the first incorporation of Hadamard transforms into the SRAM IMC paradigm, realized via a hybrid transformation unit that selectively replaces convolutional layers, enabling feature transformation without introducing any additional multiplications. This approach achieves structural model compression on mainstream architectures (e.g., ResNet), substantially reducing both computational complexity and parameter count. Experimental evaluations on CIFAR-10, CIFAR-100, and Tiny ImageNet demonstrate up to a 52% reduction in multiplication operations with negligible accuracy degradation (<0.5%). The method thus delivers a deployable, energy-efficient paradigm for low-power edge AI, balancing computational efficiency and inference accuracy.
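The key property the summary relies on is that the Hadamard transform can be computed with additions and subtractions only. A minimal sketch of the fast Walsh-Hadamard transform (FWHT) illustrates this; the paper's actual layer implementation (its hybrid transformation unit and IMC mapping) is not described here, so this is only the underlying butterfly recursion:

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform of a length-2^k vector.

    Each butterfly stage computes only sums and differences,
    so the whole transform is multiplication-free.
    """
    x = np.asarray(x, dtype=np.float64).copy()
    n = x.shape[0]
    assert n & (n - 1) == 0 and n > 0, "length must be a power of two"
    h = 1
    while h < n:
        for i in range(0, n, h * 2):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b  # butterfly: add/subtract only
        h *= 2
    return x
```

Because the Hadamard matrix satisfies H·H = n·I, applying `fwht` twice returns the input scaled by its length, which is a convenient sanity check.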

๐Ÿ“ Abstract
Reducing the cost of multiplications is critical for efficient deep neural network deployment, especially in energy-constrained edge devices. In this work, we introduce HTMA-Net, a novel framework that integrates the Hadamard Transform (HT) with multiplication-avoiding (MA) SRAM-based in-memory computing to reduce arithmetic complexity while maintaining accuracy. Unlike prior methods that only target multiplications in convolutional layers or focus solely on in-memory acceleration, HTMA-Net selectively replaces intermediate convolutions with hybrid Hadamard-based transform layers whose internal convolutions are implemented via multiplication-avoiding in-memory operations. We evaluate HTMA-Net on ResNet-18 using CIFAR-10, CIFAR-100, and Tiny ImageNet, and provide a detailed comparison against regular, MA-only, and HT-only variants. Results show that HTMA-Net eliminates up to 52% of multiplications compared to baseline ResNet-18, ResNet-20, and ResNet-50 models, while achieving comparable accuracy and significantly reducing computational complexity and the number of parameters. Our results demonstrate that combining structured Hadamard transform layers with SRAM-based in-memory computing multiplication-avoiding operators is a promising path towards efficient deep learning architectures.
Problem

Research questions and friction points this paper is trying to address.

Reducing multiplication operations in neural networks
Implementing Hadamard Transform with in-memory computing
Maintaining accuracy while decreasing computational complexity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Hadamard Transform to replace convolutions
Implements multiplication-avoiding in-memory computing operations
Combines structured transforms with SRAM-based computing
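The "multiplication-avoiding" operations listed above are not specified in this summary; prior multiplication-free work from the same group (e.g., Cetin et al.) replaces the elementwise product a·b with a sign-based operator such as sign(a·b)·(|a| + |b|), which needs only sign logic and additions. As an assumption about the flavor of operator involved, a hypothetical `ma_dot` sketch:

```python
import numpy as np

def ma_dot(w, x):
    """Multiplication-avoiding analogue of a dot product.

    Each term uses the sign-based operator a (+) b = sign(a*b) * (|a| + |b|)
    in place of a*b. The sign product below is only a +/-1 flip, not an
    arithmetic multiplication, so hardware needs adders and sign logic only.
    This operator is an assumption drawn from prior multiplication-free
    networks, not necessarily the exact operator HTMA-Net uses.
    """
    w = np.asarray(w, dtype=np.float64)
    x = np.asarray(x, dtype=np.float64)
    s = np.sign(w) * np.sign(x)          # +/-1 (or 0) sign agreement
    return float(np.sum(s * (np.abs(w) + np.abs(x))))
```

Like the true dot product, the result is positive when weights and inputs agree in sign, which is why such operators can stand in for convolutions with modest accuracy loss.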
๐Ÿ”Ž Similar Papers
No similar papers found.
Emadeldeen Hamdan
Ph.D. Student, Department of Electrical and Computer Engineering, University of Illinois Chicago
Signal Processing, Data Science
Ahmet Enis Cetin
Electrical and Computer Engineering Department, University of Illinois Chicago, Chicago, IL, USA