🤖 AI Summary
Conventional library-based approaches for power and timing prediction in multi-stage datapaths rely on simplified driver/load characterizations, limiting accuracy and generalizability.
Method: We propose an end-to-end neural network framework that operates directly on SPICE netlists. It introduces the first language-inspired, netlist-aware Transformer architecture, combining CNN-Transformer hybrid encoding with node-level topological modeling. Recursive propagation explicitly captures intrinsic gate delays and interconnect coupling effects, while an integrated crosstalk correction subnet refines predictions. The model jointly predicts transient waveforms and propagation delays, enabling full waveform visibility and precise timing alignment.
Results: Evaluated on diverse industrial circuits, the method achieves SPICE-level accuracy—timing prediction RMSE remains consistently below 0.0098 ns—with high fidelity and scalability.
📝 Abstract
This paper proposes a neural framework for power and timing prediction of multi-stage datapaths, distinguishing itself from traditional library-based analytical methods that depend on driver characterization and load simplifications. To the best of our knowledge, this is the first language-based, netlist-aware neural network designed explicitly for standard cells. Our approach employs two pre-trained neural models, one for waveform prediction and one for delay estimation, that directly infer transient waveforms and propagation delays from SPICE netlists, conditioned on critical physical parameters such as load capacitance, input slew, and gate size. This method accurately captures both intrinsic and coupling-induced delay effects without requiring simplification or interpolation. For multi-stage timing prediction, we implement a recursive propagation strategy in which the predicted waveform from each stage feeds into the subsequent stage, cumulatively capturing delays across the logic chain. This approach ensures precise timing alignment and complete waveform visibility throughout complex signal pathways. The waveform predictor uses a hybrid CNN-Transformer architecture with netlist-aware node-level encoding, circumventing the fixed input dimensionality of conventional Transformers. In addition, specialized subnetworks separately handle primary delay estimation and crosstalk correction. Experimental results demonstrate SPICE-level accuracy, with RMSE consistently below 0.0098 ns across diverse industrial circuits. The proposed framework thus provides a scalable, structurally adaptable neural alternative to conventional power and timing engines, with high fidelity to physical circuit behaviors.
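The recursive propagation strategy can be sketched as below. This is a minimal illustration only: `predict_stage`, its delay scaling, and the per-stage `(load_cap, gate_size)` parameters are hypothetical stand-ins for the paper's pretrained waveform and delay networks, shown merely to make concrete how each stage's predicted waveform drives the next stage while delays accumulate along the chain.

```python
import numpy as np

def predict_stage(waveform, load_cap, gate_size):
    """Hypothetical stand-in for the pretrained per-stage model.

    A real implementation would run the neural waveform/delay
    predictors; here we just shift the waveform to mimic a
    propagation delay proportional to load and gate size.
    """
    delay_samples = int(10 * load_cap * gate_size)   # placeholder scaling
    out = np.roll(waveform, delay_samples)
    out[:delay_samples] = waveform[0]                # hold the initial level
    return out, delay_samples * 0.001                # placeholder delay in ns

def propagate_chain(input_waveform, stages):
    """Recursive propagation: each stage's predicted waveform
    becomes the next stage's input; stage delays accumulate."""
    wave = input_waveform
    total_delay = 0.0
    for load_cap, gate_size in stages:
        wave, stage_delay = predict_stage(wave, load_cap, gate_size)
        total_delay += stage_delay
    return wave, total_delay

# Example: a step input driven through a two-stage chain.
step = np.where(np.arange(100) >= 50, 1.0, 0.0)
final_wave, chain_delay = propagate_chain(step, [(1.0, 2.0), (0.5, 1.0)])
```

The key design point mirrored here is that every intermediate waveform remains available after propagation, which is what gives the framework full waveform visibility along the logic chain rather than only an end-to-end delay number.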