Modeling Closed-loop Analog Matrix Computing Circuits with Interconnect Resistance

📅 2025-11-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
In RRAM-based analog matrix computing (AMC), interconnect resistance increasingly degrades computational accuracy as system scale grows, and conventional circuit simulation becomes prohibitively slow at the same scale. To address this, we propose the first circuit model that jointly accounts for interconnect resistance and closed-loop feedback. We design a fast numerical solver that exploits the sparsity of the Jacobian matrix and introduce a bias compensation strategy that systematically suppresses interconnect-induced errors, revealing a scaling law for the optimal bias configuration. Our method enables high-precision simulation of matrix inversion, eigenvector computation, and open-loop matrix-vector multiplication, achieving speedups of several orders of magnitude over SPICE-level simulation. With bias compensation, matrix inversion error is reduced by over 50% and eigenvector computation error by over 70%.

📝 Abstract
Analog matrix computing (AMC) circuits based on resistive random-access memory (RRAM) have shown strong potential for accelerating matrix operations. However, as matrix size grows, interconnect resistance increasingly degrades computational accuracy and limits circuit scalability. Modeling and evaluating these effects are therefore critical for developing effective mitigation strategies. Traditional SPICE (Simulation Program with Integrated Circuit Emphasis) simulators, which rely on modified nodal analysis, become prohibitively slow for large-scale AMC circuits due to the quadratic growth of nodes and feedback connections. In this work, we model AMC circuits with interconnect resistance for two key operations, matrix inversion (INV) and eigenvector computation (EGV), and propose fast solving algorithms tailored for each case. The algorithms exploit the sparsity of the Jacobian matrix, enabling rapid and accurate solutions. Compared to SPICE, they achieve several orders of magnitude acceleration while maintaining high accuracy. We further extend the approach to open-loop matrix-vector multiplication (MVM) circuits, demonstrating similar efficiency gains. Finally, leveraging these fast solvers, we develop a bias-based compensation strategy that reduces interconnect-induced errors by over 50% for INV and 70% for EGV circuits. It also reveals the scaling behavior of the optimal bias with respect to matrix size and interconnect resistance.
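The fast-solver idea in the abstract — exploiting the sparsity of the linearized nodal equations — can be illustrated with a minimal sketch, not the authors' implementation: a sparse direct solve of a stand-in Jacobian system J·dx = −f with SciPy, where the matrix size, density, and values are all hypothetical placeholders for a real crossbar's nodal conductance structure.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Illustrative sketch only: the nodal equations of an AMC crossbar with
# interconnect resistance yield a large but sparse Jacobian J. Solving
# J @ dx = -f by sparse factorization, instead of treating J as dense,
# is what makes the per-iteration cost scale far better than full MNA.

n = 1000  # number of circuit nodes (hypothetical size)
rng = np.random.default_rng(0)

# Build a sparse, strongly diagonally dominant matrix as a stand-in for
# the linearized nodal conductance matrix (guarantees nonsingularity).
J = sp.random(n, n, density=0.005, random_state=0, format="csr")
J = J + sp.eye(n) * (np.abs(J).sum(axis=1).max() + 1.0)

f = rng.standard_normal(n)  # residual vector of the nodal equations

dx_sparse = spla.spsolve(J.tocsc(), -f)      # sparse direct solve
dx_dense = np.linalg.solve(J.toarray(), -f)  # dense reference solve

# Both routes agree; only the sparse one scales to large crossbars.
assert np.allclose(dx_sparse, dx_dense, atol=1e-8)
```

In a Newton iteration over the full nonlinear circuit, a solve of this form would be repeated per step, so the sparse factorization is where the reported orders-of-magnitude speedup over SPICE would come from.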
Problem

Research questions and friction points this paper is trying to address.

Modeling interconnect resistance effects on analog matrix computing accuracy and scalability
Developing fast algorithms to replace slow SPICE simulations for large circuits
Creating compensation strategies to reduce interconnect-induced computational errors
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fast solving algorithms exploiting Jacobian sparsity
Bias-based compensation strategy reducing interconnect errors
Modeling interconnect resistance for matrix operations
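The bias-based compensation listed above can be caricatured as a one-dimensional sweep: evaluate a circuit error metric at candidate bias values and keep the minimizer. The error model below is a toy convex placeholder, not the paper's model; `n`, `r_wire`, and the assumed linear-in-`n * r_wire` optimum are all hypothetical.

```python
import numpy as np

# Toy sketch of a bias-compensation sweep (not the paper's method):
# for each candidate bias b, evaluate a simulated error e(b) and keep
# the minimizer. A fast circuit solver makes such sweeps affordable.

def circuit_error(bias, n=64, r_wire=1.0):
    # Convex stand-in: in this toy model the optimal bias scales
    # linearly with matrix size n and wire resistance r_wire.
    optimal = 1e-3 * n * r_wire
    return (bias - optimal) ** 2 + 1e-6  # residual error floor

biases = np.linspace(0.0, 0.2, 201)          # candidate bias values
errors = [circuit_error(b) for b in biases]
best = biases[int(np.argmin(errors))]        # error-minimizing bias
```

Repeating such a sweep across `n` and `r_wire` is one plausible way a scaling law for the optimal bias could be extracted empirically, which is the kind of behavior the paper reports.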
Mu Zhou
School of Electronics Engineering and Computer Science, Peking University, Beijing 100871, China
Junbin Long
Institute for Artificial Intelligence, and School of Integrated Circuits, Peking University, Beijing 100871, China
Yubiao Luo
Institute for Artificial Intelligence, and School of Integrated Circuits, Peking University, Beijing 100871, China
Zhong Sun
Peking University
analog computing, resistive memory, matrix equation solving, in-memory computing