Alignment Unlocks Complementarity: A Framework for Multiview Circuit Representation Learning

📅 2025-09-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
In multiview learning for Boolean circuits, structural heterogeneity among graph representations such as AIGs and XMGs hinders effective fusion, and self-supervised masked modeling often misidentifies cross-view contextual information as noise. Method: We propose a “function-first alignment” paradigm that makes functional equivalence a prerequisite for multiview self-supervised learning. We design an equivalence alignment loss that projects heterogeneous graph structures into a shared, function-aware representation space, then perform multiview masked modeling under a staged curriculum training strategy. Contribution/Results: Our approach significantly improves generalization, turning previously ineffective cross-view masked modeling into a strong pretraining signal. It achieves state-of-the-art (SOTA) performance on logic synthesis and combinational equivalence checking, with robust transfer across downstream circuit understanding tasks.
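The summary does not spell out the form of the equivalence alignment loss. A common way to realize "pull functionally equivalent pairs together across views" is an InfoNCE-style contrastive objective; the sketch below assumes that formulation, with row i of each embedding matrix coming from functionally equivalent AIG and XMG views of the same circuit (the function name, batch layout, and temperature are illustrative, not from the paper):

```python
import numpy as np

def equivalence_alignment_loss(z_aig, z_xmg, temperature=0.1):
    """InfoNCE-style alignment sketch: row i of z_aig and row i of z_xmg
    embed functionally equivalent circuits and should match; all other
    rows in the batch serve as negatives."""
    # L2-normalize each view so similarities are cosine similarities
    a = z_aig / np.linalg.norm(z_aig, axis=1, keepdims=True)
    x = z_xmg / np.linalg.norm(z_xmg, axis=1, keepdims=True)
    logits = a @ x.T / temperature                 # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # cross-entropy with the diagonal (the equivalent pair) as the target
    return -np.mean(np.diag(log_prob))
```

Minimizing this loss drives equivalent AIG/XMG pairs toward the same region of the representation space, which is the precondition the paper argues masked modeling needs.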

📝 Abstract
Multiview learning on Boolean circuits holds immense promise, as different graph-based representations offer complementary structural and semantic information. However, the vast structural heterogeneity between views, such as an And-Inverter Graph (AIG) versus an XOR-Majority Graph (XMG), poses a critical barrier to effective fusion, especially for self-supervised techniques like masked modeling. Naively applying such methods fails, as the cross-view context is perceived as noise. Our key insight is that functional alignment is a necessary precondition to unlock the power of multiview self-supervision. We introduce MixGate, a framework built on a principled training curriculum that first teaches the model a shared, function-aware representation space via an Equivalence Alignment Loss. Only then do we introduce a multiview masked modeling objective, which can now leverage the aligned views as a rich, complementary signal. Extensive experiments, including a crucial ablation study, demonstrate that our alignment-first strategy transforms masked modeling from an ineffective technique into a powerful performance driver.
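The abstract's "alignment-first" curriculum (learn the shared space via the Equivalence Alignment Loss, only then enable masked modeling) can be expressed as a per-step loss-weight schedule. This is a minimal sketch under the assumption of a hard two-stage switch; the function name, the `align_warmup` parameter, and the weight values are hypothetical:

```python
def staged_curriculum(total_steps, align_warmup):
    """Yield per-step loss weights for a two-stage curriculum:
    an alignment-only warmup, then joint alignment + masked modeling."""
    for step in range(total_steps):
        mask_on = step >= align_warmup  # masked modeling enabled after warmup
        yield {"align": 1.0, "mask": 1.0 if mask_on else 0.0}
```

A training loop would combine the losses as `w["align"] * align_loss + w["mask"] * mask_loss` at each step; a gradual ramp of the mask weight would be an equally plausible reading of "staged".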
Problem

Research questions and friction points this paper is trying to address.

Overcoming structural heterogeneity between Boolean circuit representations
Enabling effective multiview fusion for self-supervised learning
Transforming cross-view context from noise to complementary signal
Innovation

Methods, ideas, or system contributions that make the work stand out.

Functional alignment enables multiview self-supervision
Equivalence Alignment Loss creates shared representation space
Alignment-first curriculum unlocks masked modeling performance