Exploiting Function-Family Structure in Analog Circuit Optimization

📅 2025-11-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Analog circuit optimization is typically treated as a black-box search, yet device physics, governed by exponential laws, rational transfer functions, and region transitions, imposes strong structured priors on performance mappings. Conventional Gaussian-process surrogates, which assume global smoothness and stationarity, fail to model highly nonlinear circuit responses accurately in small-sample regimes (50–100 evaluations). Method: We propose the Circuit Prior Network (CPN), the first method to encode physical laws as learnable, structured priors, eliminating manual surrogate modeling. CPN builds a discrete posterior based on TabPFN v2 and introduces Direct Expected Improvement (DEI) for exact acquisition computation in non-Gaussian, non-stationary settings. Contribution/Results: On six circuit optimization tasks, CPN outperforms 25 baselines, achieving R² = 0.99 in small-sample regression, improving final performance by 1.05–3.81×, and accelerating convergence by 3.34–11.89×. This advances analog circuit optimization from empirical black-box tuning toward systematic, physics-informed structural identification.

📝 Abstract
Analog circuit optimization is typically framed as black-box search over arbitrary smooth functions, yet device physics constrains performance mappings to structured families: exponential device laws, rational transfer functions, and regime-dependent dynamics. Off-the-shelf Gaussian-process surrogates impose globally smooth, stationary priors that are misaligned with these regime-switching primitives and can severely misfit highly nonlinear circuits at realistic sample sizes (50--100 evaluations). We demonstrate that pre-trained tabular models encoding these primitives enable reliable optimization without per-circuit engineering. Circuit Prior Network (CPN) combines a tabular foundation model (TabPFN v2) with Direct Expected Improvement (DEI), computing expected improvement exactly under discrete posteriors rather than Gaussian approximations. Across 6 circuits and 25 baselines, structure-matched priors achieve $R^2 \approx 0.99$ in small-sample regimes where GP-Matérn attains only $R^2 = 0.16$ on Bandgap, deliver $1.05$--$3.81\times$ higher FoM with $3.34$--$11.89\times$ fewer iterations, and suggest a shift from hand-crafting models as priors toward systematic physics-informed structure identification. Our code will be made publicly available upon paper acceptance.
Problem

Research questions and friction points this paper is trying to address.

Optimizing analog circuits using structured function families
Improving small-sample surrogate modeling for nonlinear circuits
Reducing iterations and engineering effort in circuit optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using pre-trained tabular models encoding circuit primitives
Combining TabPFN v2 foundation model with Direct Expected Improvement
Shifting from hand-crafted priors to systematic physics-informed structure identification
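The DEI idea listed above, computing expected improvement exactly from a discrete posterior rather than through a Gaussian approximation, reduces to a probability-weighted sum over the posterior's support points. A minimal sketch of that computation, assuming the posterior is given as support values with probabilities (the function name and toy numbers are illustrative, not the paper's implementation):

```python
import numpy as np

def direct_expected_improvement(support, probs, y_best):
    """Exact expected improvement under a discrete posterior.

    support : (n,) candidate objective values carrying posterior mass
    probs   : (n,) probabilities of those values, summing to 1
    y_best  : best objective value observed so far (maximization)
    """
    # Improvement is zero wherever the posterior falls below the incumbent.
    improvement = np.maximum(support - y_best, 0.0)
    # EI is the exact expectation over the discrete distribution.
    return float(np.sum(probs * improvement))

# Toy posterior with mass at three outcome values.
support = np.array([0.2, 0.5, 0.9])
probs = np.array([0.5, 0.3, 0.2])
print(direct_expected_improvement(support, probs, y_best=0.4))
# ≈ 0.13, i.e. 0.3·(0.5−0.4) + 0.2·(0.9−0.4)
```

Because the expectation is a finite sum, no Gaussian assumption on the surrogate's predictive distribution is needed, which is what makes the acquisition exact for non-Gaussian, non-stationary posteriors.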
Zhuohua Liu
School of Integrated Circuit Science and Engineering, Beihang University
Kaiqi Huang
School of Mechatronic Control Engineering, Shenzhen University
Qinxin Mei
School of Mechatronic Control Engineering, Shenzhen University
Yuanqi Hu
School of Integrated Circuit Science and Engineering, Beihang University
Wei W. Xing
The University of Sheffield
Bayesian optimization · Electronic design automation (EDA) · AI4EDA · machine learning