Background
Analog circuit sizing is tedious, combinatorially hard work in practice: even a simple bandgap plus bias-generator combination involves over a dozen continuous parameters, all coupled across PVT corners and Monte Carlo variation. Manual tuning is both slow and hard to reproduce. Commercial EDA tools (Cadence Cerebrus, Synopsys/MunEDA WiCkeD, Solido Design Automation) address part of this, but licenses are concentrated in large-company flow contracts and remain out of reach for students and small teams.
analog-sizing-ml aims to fill this gap by combining BoTorch's Bayesian optimization with gm/Id physical priors in a lightweight tool that drives remote Spectre simulation on commodity servers and accepts YAML design-intent declarations. The goal is for a graduate student to integrate it into a project in hours, rather than spending months on pipeline setup.
Methodology
The tool comprises three layers. At the bottom, BoTorch's qExpectedHypervolumeImprovement (qEHVI) multi-objective sampler drives Pareto exploration under a simulation budget; users declare objectives as a vector (e.g., vref_dev, spread, power) without pre-committing to weights. The middle layer is the gm/Id prior module (gmid_prior.py): given each branch's (gm/Id, I_target, L_target), it reverse-queries a local TSMC55 gm/Id lookup table for feasible W intervals and seeds BO's initial points inside the physically reasonable region, more than doubling the in-spec rate compared to pure Latin hypercube sampling (LHS).
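The reverse query behind the gm/Id prior can be sketched with a small lookup table. The table values, tolerance, and function name below are made up for illustration; the real module reads foundry-characterized TSMC55 tables.

```python
import numpy as np

# Hypothetical slice of a gm/Id lookup table for one channel length:
# current density J = Id/W (A/um) tabulated against gm/Id (1/V).
# Real tables come from DC sweeps of the foundry model; values are illustrative.
gmid_grid = np.array([4.0, 8.0, 12.0, 16.0, 20.0, 24.0])      # gm/Id [1/V]
jd_grid = np.array([8e-6, 3e-6, 1.2e-6, 5e-7, 2e-7, 8e-8])    # Id/W [A/um]

def feasible_w_interval(gmid_target, i_target, tol=0.15):
    """Reverse-query: map a (gm/Id, I_target) pair to a feasible W interval.

    Interpolate current density at gm/Id * (1 +/- tol), then W = I_target / J.
    Higher gm/Id means lower current density and hence larger W, so the
    interval endpoints come from the two tolerance edges.
    """
    j_lo = np.interp(gmid_target * (1 - tol), gmid_grid, jd_grid)
    j_hi = np.interp(gmid_target * (1 + tol), gmid_grid, jd_grid)
    w_a, w_b = i_target / j_lo, i_target / j_hi
    return min(w_a, w_b), max(w_a, w_b)

w_lo, w_hi = feasible_w_interval(gmid_target=12.0, i_target=10e-6)

# Seed BO's initial points uniformly inside the feasible interval instead of
# across the full [W_min, W_max] search box.
rng = np.random.default_rng(0)
seeds = rng.uniform(w_lo, w_hi, size=8)
```

The seeding step is why the prior pays off: BO's surrogate starts with observations that are already near-feasible instead of wasting early evaluations mapping out grossly infeasible corners of the box.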
The top layer provides the YAML schema and CLI. All specs, variable bounds, objectives/constraints, and warm-start modes are declared in configuration; simulation goes through the existing Spectre black box via netlist .param substitution and PSF parsing. The parallelization pipeline reuses server_capacity.probe() to adaptively size the ThreadPool and set spectre +mt according to server load, saturating 16 cores when the machine is idle and backing off automatically under load. Reports, decision trees, and Pareto plots are one-command outputs of python -m sizing_opt run.
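As an illustration of the design-intent style described above, a declaration might look like the following. The field names and layout are hypothetical, not the shipped schema:

```yaml
# Hypothetical design-intent file; field names are illustrative only.
design: bgr55_topbias55
netlist: bgr55.scs
variables:
  W_q1:  {min: 1.0, max: 40.0, unit: um}
  L_mir: {min: 0.5, max: 4.0,  unit: um}
objectives:          # minimized jointly by qEHVI; no weights needed
  - vref_dev
  - spread
  - power
constraints:
  - psrr >= 50       # dB
init:
  mode: gmid_prior   # or: lhs / csv_warmstart
  gmid_targets: {q1_branch: 12.0}
budget:
  evals: 80
```

Everything the optimizer needs (bounds, objectives, constraints, initialization, budget) lives in one file, so a run is reproducible from the config alone.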
Current Progress
- v0.2.0 released, covering the BoTorch qEHVI multi-objective sampler plus gm/Id-prior and CSV warm-start initialization modes
- BackScatter BGR55 + TopBias55 joint demo achieved a 48% in-spec rate (80 evaluations) versus 17% for the legacy sklearn ParEGO baseline
- Three-way comparison measured (vanilla qEHVI / gm/Id prior / penalty refinement): the v2c penalty version reduced best vref_dev from 2.66 mV to 1.51 mV (a 43% reduction) while holding the 48% in-spec rate
- The gm/Id prior lets BO start in the feasible region, lifting the in-spec rate 20 percentage points over pure LHS (53% vs 33%) and directly saving 30%+ wall time when each simulation takes minutes
- Documentation (docs/how_to_use.md, 5-step integration guide), CLI sub-commands (run / check / report), and automated report generation
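The penalty refinement from the three-way comparison above can be illustrated with a minimal sketch. The hinge-style penalty and the weight are assumptions for illustration; the actual v2c penalty form is not specified here.

```python
import numpy as np

def penalized_objectives(objs, cons, weight=10.0):
    """Add a hinge penalty for constraint violations to each objective.

    objs: (n, m) array of objective values (all minimized).
    cons: (n, k) array of constraint slack, feasible when >= 0.
    A hinge penalty is one common choice; the tool's exact penalty
    form may differ.
    """
    # Total violation per point: sum of negative slack, clipped at zero.
    violation = np.clip(-cons, 0.0, None).sum(axis=1, keepdims=True)
    # Infeasible points are pushed away from the Pareto front on every axis.
    return objs + weight * violation

# Two candidate points: the first is feasible, the second violates by 0.2.
objs = np.array([[2.66, 0.5],
                 [1.51, 0.6]])
cons = np.array([[0.1],
                 [-0.2]])
pen = penalized_objectives(objs, cons)
```

Because the penalty inflates every objective of an infeasible point, the hypervolume-based acquisition naturally steers sampling back toward the feasible region without a separate constraint model.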
Next Steps
- Re-evaluate the constraint GP on tasks where objectives and constraints are weakly correlated (currently disabled because strong correlation pulls the search into violation regions)
- Integrate an RTL signoff oracle for hybrid analog/digital joint sizing, covering PHY blocks in mixed-signal SoCs
- Improve the I_target auto-calibration workflow for the gm/Id prior, to keep users from misconfiguring branch currents and causing dimension collapse
- Open-source release and English documentation