Hardware-Adaptive and Superlinear-Capacity Memristor-based Associative Memory

📅 2025-05-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Memristor-based associative memory systems suffer from low tolerance to hardware defects, limited storage capacity, and difficulty operating in analog mode. To address these bottlenecks, this paper proposes an adaptive multilayer Hopfield network architecture implemented on integrated memristor hardware. Its core innovation is a hardware-adaptive learning algorithm that simultaneously improves robustness against device defects and enables superlinear capacity scaling, reaching $\propto N^{1.49}$ in binary mode and $\propto N^{1.74}$ in continuous mode. The architecture integrates crossbar arrays, synchronous parallel state updates, and online training, and supports configurable hidden layers and scalable multilayer extension. Experimentally, the effective capacity under a 50% device failure rate reaches three times that of state-of-the-art methods, while synchronous updates reduce energy by 8.8× and latency by 99.7% for 64-dimensional patterns relative to asynchronous schemes. The system also achieves high-fidelity pattern recall in both binary and continuous modes.
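To make the recall dynamics above concrete, here is a minimal sketch of a single-layer Hopfield network with synchronous (fully parallel) state updates and randomly zeroed weights standing in for stuck devices. The Hebbian weight rule, the sizes, and the fault model are illustrative assumptions, not the paper's hardware-adaptive algorithm or its multilayer architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

N, P = 64, 8                          # neurons, stored patterns (illustrative sizes)
patterns = rng.choice([-1.0, 1.0], size=(P, N))

# Hebbian weights as a stand-in for the paper's hardware-adaptive training
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0.0)

# Simulate stuck memristor devices: a random fraction of weights forced to zero
fault_rate = 0.5
W = W * (rng.random(W.shape) > fault_rate)

def recall(cue, steps=20):
    """Synchronous update: drive all crossbar rows at once and threshold
    the column currents, instead of updating one neuron at a time."""
    s = cue.copy()
    for _ in range(steps):
        s_new = np.where(W @ s >= 0, 1.0, -1.0)
        if np.array_equal(s_new, s):  # reached a fixed point
            break
        s = s_new
    return s

# Corrupt a quarter of one stored pattern, then try to recall it
cue = patterns[0].copy()
cue[rng.choice(N, size=N // 4, replace=False)] *= -1
print("overlap with stored pattern:", recall(cue) @ patterns[0] / N)
```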

📝 Abstract
Brain-inspired computing aims to mimic cognitive functions like associative memory, the ability to recall complete patterns from partial cues. Memristor technology offers promising hardware for such neuromorphic systems due to its potential for efficient in-memory analog computing. Hopfield Neural Networks (HNNs) are a classic model for associative memory, but implementations on conventional hardware suffer from efficiency bottlenecks, while prior memristor-based HNNs faced challenges with vulnerability to hardware defects due to offline training, limited storage capacity, and difficulty processing analog patterns. Here we introduce and experimentally demonstrate on integrated memristor hardware a new hardware-adaptive learning algorithm for associative memories that significantly improves defect tolerance and capacity, and naturally extends to scalable multilayer architectures capable of handling both binary and continuous patterns. Our approach achieves 3× the effective capacity of state-of-the-art methods under 50% device faults. Furthermore, its extension to multilayer architectures enables superlinear capacity scaling ($\propto N^{1.49}$ for binary patterns) and effective recall of continuous patterns ($\propto N^{1.74}$ scaling), compared to the linear capacity scaling of previous HNNs. It also provides the flexibility to adjust capacity by tuning the number of hidden neurons for patterns of the same size. By leveraging the massive parallelism of the hardware enabled by synchronous updates, it reduces energy by 8.8× and latency by 99.7% for 64-dimensional patterns over asynchronous schemes, with greater improvements at scale. This promises more reliable memristor-based associative memory systems and enables new applications research thanks to the significantly improved capacity, efficiency, and flexibility.
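As a side note on the scaling claims, a superlinear capacity law $C \propto N^{\alpha}$ is typically verified by fitting a line in log-log space; the slope is the reported exponent (1.49 or 1.74 here). The sketch below demonstrates that fit on synthetic data with a known exponent; the numbers are placeholders, only the fitting procedure is standard.

```python
import numpy as np

# Hypothetical (size, capacity) measurements; the paper reports the fitted
# exponents (~1.49 binary, ~1.74 continuous), not these raw values.
N = np.array([16.0, 32.0, 64.0, 128.0, 256.0])
C = 0.5 * N**1.49                    # synthetic data with a known exponent

# C ∝ N^alpha means log C = alpha * log N + const, so alpha is the slope
# of a straight-line fit in log-log space.
alpha, _ = np.polyfit(np.log(N), np.log(C), 1)
print(f"fitted scaling exponent: {alpha:.2f}")   # ≈ 1.49
```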
Problem

Research questions and friction points this paper is trying to address.

Enhancing defect tolerance and capacity in memristor-based associative memory
Enabling superlinear capacity scaling for binary and continuous patterns
Reducing energy and latency through hardware-adaptive learning algorithms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hardware-adaptive learning algorithm for defect tolerance (see the sketch after this list)
Multilayer architecture enabling superlinear capacity scaling
Synchronous updates reducing energy and latency significantly
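A minimal sketch of the broad idea behind hardware-adaptive, defect-aware training, as referenced in the list above: weight updates are simply masked at devices known to be stuck, so the remaining healthy devices learn to compensate. The fault map, the tanh readout, and the gradient-style rule are assumptions for illustration, not the paper's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 64, 8
patterns = rng.choice([-1.0, 1.0], size=(P, N))

# Assumed fault map: True where a device is stuck and cannot be programmed
stuck = rng.random((N, N)) < 0.5
W = np.zeros((N, N))

# Gradient-style training that only writes to healthy devices, so the
# programmable weights learn to route around the stuck ones
lr = 0.05
for _ in range(200):
    for x in patterns:
        err = x - np.tanh(W @ x)      # reconstruction error for this pattern
        dW = lr * np.outer(err, x)
        dW[stuck] = 0.0               # never update a defective device
        W += dW

print("mean |error| after training:", np.mean(np.abs(patterns - np.tanh(patterns @ W.T))))
```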
👥 Authors
Chengping He
Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong SAR, China
Mingrui Jiang
PhD candidate, The University of Hong Kong
Keyi Shan
Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong SAR, China
Szu-Hao Yang
Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong SAR, China
Zefan Li
Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong SAR, China
Shengbo Wang
Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong SAR, China
Giacomo Pedretti
Research Scientist, Hewlett Packard Laboratories
Research interests: AI accelerators, In-memory computing, Neuromorphic Computing, Analog computing, Emerging memories
Jim Ignowski
Hewlett Packard Labs, Hewlett Packard Enterprise, Milpitas, CA, USA
Can Li
Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong SAR, China; Center for Advanced Semiconductor and Integrated Circuit, The University of Hong Kong, Hong Kong SAR, China