Learning from Scratch: Structurally-masked Transformer for Next Generation Lib-free Simulation

📅 2025-07-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Conventional library-based approaches for power and timing prediction in multi-stage datapaths rely on simplified driver/load characterizations, limiting accuracy and generalizability. Method: We propose an end-to-end neural network framework that operates directly on SPICE netlists. It introduces the first language-inspired, netlist-aware Transformer architecture, combining CNN-Transformer hybrid encoding with node-level topological modeling. Recursive propagation explicitly captures intrinsic gate delays and interconnect coupling effects, while an integrated crosstalk correction subnet refines predictions. The model jointly predicts transient waveforms and propagation delays, enabling full waveform visibility and precise timing alignment. Results: Evaluated on diverse industrial circuits, the method achieves SPICE-level accuracy—timing prediction RMSE remains consistently below 0.0098 ns—with high fidelity and scalability.
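To make the recursive propagation concrete, here is a minimal sketch in Python, assuming hypothetical pre-trained handles `waveform_net` and `delay_net` conditioned on per-stage physical features (load capacitance, input slew, gate size); the paper's actual model interfaces are not shown here, so this is an illustration of the idea, not the implementation.

```python
import torch

def propagate_path(stages, input_waveform, waveform_net, delay_net):
    """Push a waveform through a multi-stage datapath, accumulating delay.

    stages: list of per-stage feature tensors (load cap, input slew, gate size, ...)
    input_waveform: sampled transient at the path input, shape (T,)
    """
    wave = input_waveform
    total_delay = 0.0
    for stage_feats in stages:
        # Each stage's predicted output waveform becomes the next stage's input,
        # so intrinsic gate delay and interconnect shaping compound along the chain.
        wave = waveform_net(wave, stage_feats)
        total_delay += delay_net(wave, stage_feats).item()
    return wave, total_delay

# Toy stand-ins so the sketch runs end to end (NOT the paper's models).
waveform_net = lambda w, f: torch.roll(w, 1)   # fake per-stage waveform shaping
delay_net = lambda w, f: torch.tensor(0.01)    # fake 10 ps per-stage delay
wave_out, d = propagate_path([None] * 3, torch.zeros(100), waveform_net, delay_net)
print(f"accumulated path delay: {d:.3f} ns")
```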

📝 Abstract
This paper proposes a neural framework for power and timing prediction of multi-stage data paths, distinguishing itself from traditional library-based analytical methods that depend on driver characterization and load simplifications. To the best of our knowledge, this is the first language-based, netlist-aware neural network designed explicitly for standard cells. Our approach employs two pre-trained neural models, for waveform prediction and delay estimation, that directly infer transient waveforms and propagation delays from SPICE netlists, conditioned on critical physical parameters such as load capacitance, input slew, and gate size. This method accurately captures both intrinsic and coupling-induced delay effects without requiring simplification or interpolation. For multi-stage timing prediction, we implement a recursive propagation strategy in which the predicted waveform from each stage feeds the next, cumulatively capturing delays across the logic chain. This ensures precise timing alignment and complete waveform visibility throughout complex signal pathways. Waveform prediction uses a hybrid CNN-Transformer architecture with netlist-aware node-level encoding, addressing the fixed input dimensionality constraint of traditional Transformers. Specialized subnetworks separately handle primary delay estimation and crosstalk correction. Experimental results demonstrate SPICE-level accuracy, with RMSE consistently below 0.0098 ns across diverse industrial circuits. The framework thus provides a scalable, structurally adaptable neural alternative to conventional power and timing engines, with high fidelity to physical circuit behavior.
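One plausible reading of the hybrid CNN-Transformer with node-level encoding is sketched below: a 1-D CNN front end embeds local waveform shape, and a Transformer encoder mixes those embeddings with per-node netlist tokens, so the sequence length varies with the netlist rather than being fixed. All module names, dimensions, and the token-concatenation scheme are assumptions for illustration, not the paper's published architecture.

```python
import torch
import torch.nn as nn

class WaveformPredictor(nn.Module):
    """Hypothetical hybrid CNN-Transformer waveform model (illustrative only)."""

    def __init__(self, d_model=128, n_heads=4, n_layers=3, node_feat_dim=8):
        super().__init__()
        # CNN front end: embeds local transient shape into d_model channels.
        self.cnn = nn.Sequential(
            nn.Conv1d(1, d_model, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.Conv1d(d_model, d_model, kernel_size=7, padding=3),
        )
        # Per-node netlist features (e.g. load cap, slew, size) become tokens.
        self.node_embed = nn.Linear(node_feat_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 1)  # one voltage sample per position

    def forward(self, waveform, node_feats):
        # waveform: (B, T); node_feats: (B, N, node_feat_dim)
        w = self.cnn(waveform.unsqueeze(1)).transpose(1, 2)  # (B, T, d_model)
        n = self.node_embed(node_feats)                      # (B, N, d_model)
        tokens = torch.cat([n, w], dim=1)  # netlist tokens + waveform tokens
        enc = self.encoder(tokens)
        # Read the predicted transient off the waveform positions only.
        return self.head(enc[:, node_feats.size(1):]).squeeze(-1)  # (B, T)
```

Concatenating a variable number of node tokens with the waveform tokens is one way to let input size track the netlist, which matches the abstract's claim of addressing the fixed-dimensionality constraint.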
Problem

Research questions and friction points this paper is trying to address.

Predict power and timing for multi-stage data paths without traditional libraries
Infer waveforms and delays from SPICE netlists using neural models
Achieve SPICE-level accuracy with a scalable neural framework (an RMSE sketch follows this list)
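The reported accuracy metric is root-mean-square error between predicted and SPICE-simulated values. A minimal sketch with placeholder data (the tensors below are invented for illustration, not results from the paper):

```python
import torch

def rmse(pred, ref):
    # Root-mean-square error; the paper reports delay RMSE below 0.0098 ns.
    return torch.sqrt(torch.mean((pred - ref) ** 2))

# Placeholder tensors standing in for predicted vs. SPICE-simulated delays (ns).
pred = torch.tensor([0.152, 0.311, 0.478])
ref = torch.tensor([0.149, 0.318, 0.471])
print(f"RMSE = {rmse(pred, ref):.4f} ns")
```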
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid CNN-Transformer for waveform prediction
Recursive propagation for multi-stage timing
Netlist-aware neural models with separate primary-delay and crosstalk-correction subnets, avoiding library simplifications (a sketch follows this list)
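The abstract's split between a primary delay estimator and a crosstalk correction subnet suggests an additive refinement; below is a minimal sketch under that assumption. The additive composition, layer sizes, and feature vector are all guesses, since the paper does not publish the subnet design.

```python
import torch
import torch.nn as nn

class DelayEstimator(nn.Module):
    """Illustrative two-subnet delay model: base delay plus crosstalk correction."""

    def __init__(self, feat_dim=16, hidden=64):
        super().__init__()
        self.primary = nn.Sequential(    # intrinsic gate + interconnect delay
            nn.Linear(feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        self.crosstalk = nn.Sequential(  # aggressor-coupling correction term
            nn.Linear(feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, stage_feats):
        # Assumed additive refinement: corrected delay = base + crosstalk term.
        return self.primary(stage_feats) + self.crosstalk(stage_feats)
```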
Junlang Huang
Sun Yat-Sen University, School of Microelectronics Science and Technology
Hao Chen
Sun Yat-Sen University, School of Microelectronics Science and Technology
Zhong Guan
PhD of Electrical and Computer Engineering, UCSB
Electromigration · Reliability · SRAM · EDA · Simulation