David vs. Goliath: Can Small Models Win Big with Agentic AI in Hardware Design?

📅 2025-12-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) incur prohibitive computational overhead and poor energy efficiency in hardware design, challenging their practical deployment. Method: This paper challenges the "bigger is better" paradigm by proposing a Small Language Model (SLM) + agentic AI collaborative framework tailored for Verilog design. The framework employs task decomposition, context-aware prompting, multi-turn self-correction, and iterative feedback to enable continual learning and collaborative reasoning with lightweight models. Contribution/Results: Evaluated on the NVIDIA CVDP benchmark, the approach achieves design quality comparable to state-of-the-art LLMs while substantially reducing inference cost. Experimental results demonstrate that integrating lightweight models with structured agent workflows balances performance, energy efficiency, and scalability in complex chip design tasks, paving a novel pathway toward green, AI-driven adaptive hardware design.
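The workflow described in the summary (task decomposition plus a multi-turn self-correction loop around a small model) can be sketched roughly as below. All function names and heuristics here are hypothetical stand-ins, since the listing does not expose the paper's actual implementation; `slm_generate` and `check` would wrap a real SLM call and a lint/simulation step.

```python
# Minimal sketch of an SLM + agentic self-correction loop for Verilog
# generation. Every function is a hypothetical stand-in, not the
# paper's actual implementation.

def decompose(spec):
    """Split a design spec into smaller subtasks (toy heuristic)."""
    return [s.strip() for s in spec.split(";") if s.strip()]

def slm_generate(subtask, feedback=None):
    """Stand-in for a small-language-model call; returns candidate Verilog."""
    body = f"  // {subtask}\n"
    if feedback:
        body += f"  // revised after feedback: {feedback}\n"
    return f"module {subtask.replace(' ', '_')};\n{body}endmodule"

def check(code):
    """Stand-in for a lint/simulation check; returns (ok, feedback)."""
    ok = code.startswith("module") and code.rstrip().endswith("endmodule")
    return ok, None if ok else "missing module/endmodule wrapper"

def agent_design(spec, max_turns=3):
    """Decompose the task, then self-correct each subtask for a few turns."""
    results = []
    for subtask in decompose(spec):
        feedback = None
        for _ in range(max_turns):
            code = slm_generate(subtask, feedback)
            ok, feedback = check(code)
            if ok:
                break
        results.append(code)
    return results

modules = agent_design("add two inputs; register the sum")
print(len(modules))  # one generated module per subtask
```

The key design point the paper argues is that this loop, not model scale, recovers most of the quality: the checker's feedback gives the lightweight model multiple cheap attempts per subtask.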

📝 Abstract
Large Language Model (LLM) inference demands massive compute and energy, making domain-specific tasks expensive and unsustainable. As foundation models keep scaling, we ask: is bigger always better for hardware design? Our work tests this by evaluating Small Language Models coupled with a curated agentic AI framework on NVIDIA's Comprehensive Verilog Design Problems (CVDP) benchmark. Results show that agentic workflows, through task decomposition, iterative feedback, and correction, not only unlock near-LLM performance at a fraction of the cost but also create learning opportunities for agents, paving the way for efficient, adaptive solutions in complex design tasks.
Problem

Research questions and friction points this paper is trying to address.

Evaluating small models with agentic AI for hardware design efficiency
Reducing computational cost and energy in domain-specific LLM tasks
Testing if agentic workflows can match large model performance affordably
Innovation

Methods, ideas, or system contributions that make the work stand out.

Small models with agentic AI framework
Task decomposition and iterative feedback
Near-LLM performance at lower cost
Authors
Shashwat Shankar, Indian Institute of Technology Guwahati, India
Subhranshu Pandey, Indian Institute of Technology Guwahati, India
Innocent Dengkhw Mochahari, Indian Institute of Technology Guwahati, India
Bhabesh Mali, Indian Institute of Technology Guwahati, India
Animesh Basak Chowdhury, NXP USA, Inc.
Sukanta Bhattacharjee, Indian Institute of Technology Guwahati, India
Chandan Karfa, Indian Institute of Technology Guwahati, India
Topics: EDA, ML for EDA, Formal Verification, High-level Synthesis, Hardware Security