Small LLMs with Expert Blocks Are Good Enough for Hyperparameter Tuning

📅 2025-09-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Hyperparameter tuning (HPT) suffers from high computational overhead and an excessive reliance on billion-parameter foundation models. Method: the paper proposes an expert-block framework built around small language models (SLMs), centered on a Trajectory Context Summarizer (TCS) that deterministically encodes raw training trajectories into structured, interpretable context, strengthening SLMs' decision-making for HPT. Contribution/Results: running phi4:reasoning14B and qwen2.5-coder:32B locally under a 10-trial budget, the framework attains average performance within ~0.9 percentage points of GPT-4 across six diverse task categories while cutting computational resource consumption by over an order of magnitude. The approach thus delivers strong efficiency, transparency, and practical deployability without compromising tuning effectiveness.

📝 Abstract
Hyper-parameter Tuning (HPT) is a necessary step in machine learning (ML) pipelines but becomes computationally expensive and opaque with larger models. Recently, Large Language Models (LLMs) have been explored for HPT, yet most rely on models exceeding 100 billion parameters. We propose an Expert Block Framework for HPT using Small LLMs. At its core is the Trajectory Context Summarizer (TCS), a deterministic block that transforms raw training trajectories into structured context, enabling small LLMs to analyze optimization progress with reliability comparable to larger models. Using two locally-run LLMs (phi4:reasoning14B and qwen2.5-coder:32B) and a 10-trial budget, our TCS-enabled HPT pipeline achieves average performance within ~0.9 percentage points of GPT-4 across six diverse tasks.
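
The abstract does not spell out the TCS encoding, so here is a minimal sketch of the idea under stated assumptions: a deterministic function that compresses a per-epoch validation curve into one structured context line a small LLM can read. The `Trial` dataclass, its field names, and the trend heuristic are hypothetical illustrations, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Trial:
    hyperparams: dict          # e.g. {"lr": 3e-4, "batch_size": 64}
    val_scores: List[float]    # per-epoch validation metric

def summarize_trajectory(trial: Trial) -> str:
    """Deterministically condense a raw training trajectory into a short,
    structured context line (illustrative stand-in for the paper's TCS)."""
    scores = trial.val_scores
    third = max(1, len(scores) // 3)
    # Crude trend heuristic: compare the last third of the curve to the first.
    trend = ("improving"
             if sum(scores[-third:]) / third > sum(scores[:third]) / third
             else "plateaued")
    hp = ", ".join(f"{k}={v}" for k, v in sorted(trial.hyperparams.items()))
    return (f"hparams: {hp} | best_val: {max(scores):.4f} | "
            f"final_val: {scores[-1]:.4f} | epochs: {len(scores)} | trend: {trend}")
```

Because the output is fully determined by the trajectory, the same run always yields the same context string, which is what lets a small model analyze optimization progress reliably.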
Problem

Research questions and friction points this paper is trying to address.

Reducing computational costs in hyperparameter tuning
Enabling small LLMs to perform hyperparameter optimization
Achieving comparable results to large models with fewer parameters
Innovation

Methods, ideas, or system contributions that make the work stand out.

Expert Block Framework for HPT
Trajectory Context Summarizer deterministically converts raw training trajectories into structured context
Small LLMs match the tuning performance of far larger models (see the loop sketch below)
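
To make the framework concrete, below is a minimal sketch of a budget-limited tuning loop, assuming the SLM proposes each next configuration as JSON after reading the accumulated TCS lines. `propose_with_slm` and `train_and_eval` are hypothetical placeholders (e.g., a call to a locally served phi4:reasoning14B), and `summarize_trajectory` is the sketch shown after the abstract.

```python
import json

def run_hpt(train_and_eval, propose_with_slm, search_space: dict, budget: int = 10):
    """Budget-limited HPT loop: the SLM reads compact TCS context lines,
    never raw logs, and returns the next config as a JSON object."""
    history = []                                   # one TCS line per finished trial
    best_config, best_score = None, float("-inf")
    for _ in range(budget):
        config = json.loads(propose_with_slm(search_space, "\n".join(history)))
        trial = train_and_eval(config)             # runs training, returns a Trial
        history.append(summarize_trajectory(trial))
        score = max(trial.val_scores)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score
```

The 10-trial default mirrors the budget reported in the paper; everything else in the loop is an assumption for illustration.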
Om Naphade
Department of Physics, Indian Institute of Technology Roorkee (IIT Roorkee), Uttarakhand, India.
Saksham Bansal
Department of Mechanical Engineering, Indian Institute of Technology Roorkee (IIT Roorkee), Uttarakhand, India.
Parikshit Pareek
Assistant Professor at Indian Institute of Technology, Roorkee
Machine Learning · Power Systems · Quantum Computing for Grid