🤖 AI Summary
Digital circuit representations in AI chip design suffer from modality fragmentation, single-modality bias, and poor generalization. Method: This paper introduces the first implementation-aware multimodal circuit encoder framework, unifying hardware code, structural graphs, and functional summaries. It systematically identifies and models four intrinsic circuit properties—parallel execution, functional equivalence transformations, multi-stage design, and reusability—and accordingly proposes three core strategies: sub-circuit partitioning, cross-modal equivalence-aware self-supervised pretraining, and retrieval-augmented inference. Contribution/Results: Experiments demonstrate consistent superiority over task-specific state-of-the-art methods across five downstream circuit design tasks. The framework enables zero-shot transfer, significantly enhancing representation universality, generalization capability, and cross-task adaptability.
📝 Abstract
The rapid advancement of AI relies on the support of integrated circuits (ICs). However, the growing complexity of digital ICs makes the traditional IC design process costly and time-consuming. In recent years, AI-assisted IC design methods have demonstrated great potential, but most methods are task-specific or focus solely on the circuit structure in graph format, overlooking other circuit modalities with rich functional information. In this paper, we introduce CircuitFusion, the first multimodal and implementation-aware circuit encoder. It encodes circuits into general representations that support different downstream circuit design tasks. To learn from circuits, we propose to fuse three circuit modalities: hardware code, structural graph, and functionality summary. More importantly, we identify four unique properties of circuits: parallel execution, functionally equivalent transformation, multiple design stages, and circuit reusability. Based on these properties, we propose new strategies for both the development and application of CircuitFusion: 1) During circuit preprocessing, exploiting the parallel nature of circuits, we split each circuit into multiple sub-circuits along sequential-element boundaries, representing each sub-circuit in all three modalities. 2) During CircuitFusion pre-training, we introduce three self-supervised tasks that utilize equivalent transformations both within and across modalities. 3) When applying CircuitFusion to downstream tasks, we propose a new retrieval-augmented inference method, which retrieves similar known circuits as a reference for predictions. It improves fine-tuning performance and even enables zero-shot inference. Evaluated on five different circuit design tasks, CircuitFusion consistently outperforms the state-of-the-art (SOTA) supervised methods specifically developed for each individual task, demonstrating its generalizability and ability to learn circuits' inherent properties.
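The retrieval-augmented inference idea above can be sketched as a simple nearest-neighbor lookup over circuit embeddings: embed a query circuit, retrieve the most similar known circuits, and use their labels as references for the prediction. The sketch below is an illustrative assumption only; the toy embeddings, the cosine-similarity retrieval, and the similarity-weighted label average are placeholders for whatever encoder outputs and fusion rule the paper actually uses.

```python
# Hypothetical sketch of retrieval-augmented inference: retrieve the k most
# similar known circuits by cosine similarity over their embeddings, then use
# a similarity-weighted mean of their labels as a zero-shot prediction.
# All names and values here are illustrative, not the paper's implementation.
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve_and_predict(query_emb, known, k=2):
    """known: list of (embedding, label) pairs for circuits with known labels.
    Retrieves the k nearest known circuits and returns their
    similarity-weighted mean label as the prediction."""
    ranked = sorted(known, key=lambda e: cosine(query_emb, e[0]), reverse=True)[:k]
    weights = [cosine(query_emb, emb) for emb, _ in ranked]
    return sum(w * lbl for w, (_, lbl) in zip(weights, ranked)) / sum(weights)

# Toy example: 3-d "circuit embeddings" with a scalar quality metric as label.
known_circuits = [
    ([1.0, 0.0, 0.0], 10.0),  # similar to the query
    ([0.9, 0.1, 0.0], 12.0),  # also similar
    ([0.0, 1.0, 0.0], 50.0),  # dissimilar; should be ignored for k=2
]
pred = retrieve_and_predict([0.95, 0.05, 0.0], known_circuits, k=2)
```

Since the query is close to the first two circuits, the prediction lands between their labels (10.0 and 12.0), illustrating how references from similar known circuits anchor the output even without fine-tuning.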