🤖 AI Summary
Analog circuit optimization is typically treated as a black-box search, yet device physics (exponential device laws, rational transfer functions, and operating-region transitions) imposes strong structured priors on performance mappings. Conventional Gaussian-process surrogates, which assume global smoothness and stationarity, fail to model highly nonlinear circuit responses accurately in small-sample regimes (50–100 evaluations).
Method: We propose the Circuit Prior Network (CPN), the first method to encode physical laws as learnable, structured priors, eliminating manual surrogate modeling. CPN builds a discrete posterior on top of TabPFN v2 and introduces Direct Expected Improvement (DEI), which computes the acquisition function exactly under non-Gaussian, non-stationary posteriors.
Contribution/Results: On six circuit optimization tasks, CPN outperforms 25 baselines: achieving R² = 0.99 in small-sample regression, improving final performance by 1.05–3.81×, and accelerating convergence by 3.34–11.89×. This advances analog circuit optimization from empirical black-box tuning toward systematic, physics-informed structural identification.
📝 Abstract
Analog circuit optimization is typically framed as black-box search over arbitrary smooth functions, yet device physics constrains performance mappings to structured families: exponential device laws, rational transfer functions, and regime-dependent dynamics. Off-the-shelf Gaussian-process surrogates impose globally smooth, stationary priors that are misaligned with these regime-switching primitives and can severely misfit highly nonlinear circuits at realistic sample sizes (50--100 evaluations). We demonstrate that pre-trained tabular models encoding these primitives enable reliable optimization without per-circuit engineering. Circuit Prior Network (CPN) combines a tabular foundation model (TabPFN v2) with Direct Expected Improvement (DEI), computing expected improvement exactly under discrete posteriors rather than Gaussian approximations. Across 6 circuits and 25 baselines, structure-matched priors achieve $R^2 \approx 0.99$ in small-sample regimes where GP-Matérn attains only $R^2 = 0.16$ on Bandgap, deliver $1.05$--$3.81\times$ higher FoM with $3.34$--$11.89\times$ fewer iterations, and suggest a shift from hand-crafting models as priors toward systematic physics-informed structure identification. Our code will be made publicly available upon paper acceptance.
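To make the "exact expected improvement under a discrete posterior" idea concrete: when a surrogate (such as TabPFN v2's bar-distribution output) returns posterior mass over a discrete set of target values, EI is a finite weighted sum and needs no Gaussian closed form. A minimal sketch, with hypothetical support points and weights (not taken from the paper):

```python
import numpy as np

def discrete_ei(values, probs, best):
    """Exact expected improvement over a discrete posterior.

    values: support points of the predicted output distribution
    probs:  posterior probability mass on each support point
    best:   incumbent (best observed) objective value, maximization
    """
    improvement = np.maximum(values - best, 0.0)  # improvement at each point
    return float(np.dot(probs, improvement))      # exact expectation, no approximation

# Toy discrete posterior over a figure of merit (illustrative numbers only).
values = np.array([0.8, 1.0, 1.2, 1.5])
probs  = np.array([0.1, 0.4, 0.3, 0.2])
best   = 1.0
ei = discrete_ei(values, probs, best)  # 0.3*0.2 + 0.2*0.5 = 0.16
```

In an optimization loop, one would evaluate `discrete_ei` for every candidate design point's predicted distribution and query the circuit simulator at the maximizer; the exact sum avoids the bias a Gaussian moment-matching approximation introduces for skewed or multimodal posteriors.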