GenEDA: Unleashing Generative Reasoning on Netlist via Multimodal Encoder-Decoder Aligned Foundation Model

📅 2025-04-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing pre-trained circuit models employ disjoint encoder (graph-structured prediction) and decoder (text generation) architectures, operating in incompatible modalities and latent spaces, hindering joint optimization. Method: We propose the first circuit encoder–decoder alignment foundation model, integrating multimodal alignment, GNN-based netlist encoding, cross-modal projection learning, and LLM interface adaptation to jointly optimize both components within a unified latent space. Contribution/Results: The model bridges gate-level netlist graph structures with LLM semantic spaces, enabling reverse generation of multi-granularity functional descriptions from netlists. It supports both open-source trainable and commercial frozen LLMs, establishing a novel netlist-driven generative reasoning paradigm. Experiments demonstrate substantial performance gains over baseline models—including GPT-4o and DeepSeek-V3—on three-tier functional description generation tasks, surpassing traditional gate-level prediction limitations.
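The core of the alignment is a learned cross-modal projector that maps GNN netlist embeddings into the LLM's input-embedding space, so projected circuit vectors can be consumed as soft-prompt tokens alongside text. The sketch below is purely illustrative: the dimensions, pooling choice, and single linear projector are assumptions, not the paper's actual architecture, and the random weights stand in for trained parameters.

```python
import numpy as np

# Hypothetical dimensions (illustrative only, not from the paper).
D_GNN = 128    # width of the GNN netlist-encoder embeddings
D_LLM = 4096   # width of the LLM token embeddings
N_SOFT = 8     # number of soft-prompt tokens handed to the LLM

rng = np.random.default_rng(0)

def encode_netlist(num_gates: int) -> np.ndarray:
    """Stand-in for the GNN netlist encoder: one embedding per gate."""
    return rng.normal(size=(num_gates, D_GNN))

# Cross-modal projector: here a single randomly initialized linear map;
# in practice it would be trained so projected vectors land in the
# LLM's semantic space.
W_proj = rng.normal(size=(D_GNN, N_SOFT * D_LLM)) / np.sqrt(D_GNN)

def project_to_llm_space(gate_embs: np.ndarray) -> np.ndarray:
    """Pool gate embeddings, then map them to N_SOFT LLM-space tokens."""
    pooled = gate_embs.mean(axis=0)        # (D_GNN,)
    soft = pooled @ W_proj                 # (N_SOFT * D_LLM,)
    return soft.reshape(N_SOFT, D_LLM)     # prepended to text embeddings

soft_tokens = project_to_llm_space(encode_netlist(num_gates=500))
print(soft_tokens.shape)  # (8, 4096)
```

With a frozen commercial LLM, the same projected representation would instead have to be surfaced through the text interface, which is why the paper proposes a second alignment paradigm for that setting.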

📝 Abstract
The success of foundation AI has motivated the research of circuit foundation models, which are customized to assist the integrated circuit (IC) design process. However, existing pre-trained circuit models are typically limited to standalone encoders for predictive tasks or decoders for generative tasks. These two model types are developed independently, operate on different circuit modalities, and reside in separate latent spaces, which restricts their ability to complement each other for more advanced applications. In this work, we present GenEDA, the first framework that aligns circuit encoders with decoders within a shared latent space. GenEDA bridges the gap between graph-based circuit representations and text-based large language models (LLMs), enabling communication between their respective latent spaces. To achieve the alignment, we propose two paradigms that support both open-source trainable LLMs and commercial frozen LLMs. Built on this aligned architecture, GenEDA enables three unprecedented generative reasoning tasks over netlists, where the model reversely generates the high-level functionality from low-level netlists in different granularities. These tasks extend traditional gate-type prediction to direct generation of full-circuit functionality. Experiments demonstrate that GenEDA significantly boosts advanced LLMs' (e.g., GPT-4o and DeepSeek-V3) performance in all tasks.
Problem

Research questions and friction points this paper is trying to address.

Encoder and decoder circuit models are developed independently, for predictive and generative tasks respectively
They operate on different circuit modalities and reside in separate latent spaces
This disjointness prevents joint optimization and limits more advanced applications
Innovation

Methods, ideas, or system contributions that make the work stand out.

Aligns circuit encoders and decoders in shared space
Bridges graph-based circuits with text-based LLMs
Supports both open-source and commercial frozen LLMs
Wenji Fang
Hong Kong University of Science and Technology
Electronic Design Automation · AI for EDA · Hardware Formal Verification
Jing Wang
Hong Kong University of Science and Technology
Yao Lu
Hong Kong University of Science and Technology
Shang Liu
Hong Kong University of Science and Technology
Zhiyao Xie
Assistant Professor, Hong Kong University of Science and Technology
EDA · Machine Learning · VLSI Circuits and Systems