🤖 AI Summary
Hyperparameter tuning (HPT) suffers from high computational overhead and an over-reliance on billion-parameter foundation models. Method: The paper proposes an expert-block framework built around small language models (SLMs), centered on a Trajectory Context Summarizer (TCS) module that deterministically encodes training trajectories into structured, interpretable context representations, strengthening SLMs' decision-making for HPT. Contribution/Results: Running phi4:reasoning14B and qwen2.5-coder:32B locally, the framework performs effective HPT within a budget of only 10 trials. Evaluated across six diverse task categories, it comes within an average of 0.9 percentage points of GPT-4 while cutting computational resource consumption by over an order of magnitude. The approach thus delivers efficiency, transparency, and practical deployability without sacrificing tuning effectiveness.
📝 Abstract
Hyper-parameter Tuning (HPT) is a necessary step in machine learning (ML) pipelines but becomes computationally expensive and opaque with larger models. Recently, Large Language Models (LLMs) have been explored for HPT, yet most approaches rely on models exceeding 100 billion parameters. We propose an Expert Block Framework for HPT using Small LLMs. At its core is the Trajectory Context Summarizer (TCS), a deterministic block that transforms raw training trajectories into structured context, enabling small LLMs to analyze optimization progress with reliability comparable to larger models. Using two locally run LLMs (phi4:reasoning14B and qwen2.5-coder:32B) and a 10-trial budget, our TCS-enabled HPT pipeline achieves average performance within ~0.9 percentage points of GPT-4 across six diverse tasks.
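To make the core idea concrete, the following is a minimal sketch of what a deterministic trajectory-to-context block could look like: it condenses raw per-trial training curves into a compact, structured text summary that a small LLM can reason over. All function and field names here (`summarize_trajectories`, `params`, `val_acc`) are illustrative assumptions, not the paper's actual interface.

```python
# Hypothetical sketch of a Trajectory Context Summarizer (TCS)-style block:
# a deterministic function (no LLM involved) that turns raw training
# trajectories into structured context for a small LLM to analyze.

def summarize_trajectories(trials):
    """Each trial is a dict: {"params": dict, "val_acc": [per-epoch floats]}."""
    lines = []
    best_idx, best_acc = -1, float("-inf")
    for i, t in enumerate(trials):
        accs = t["val_acc"]
        final = accs[-1]
        # Crude trend label: did validation accuracy improve over the run?
        trend = "improving" if len(accs) > 1 and accs[-1] > accs[0] else "flat/declining"
        lines.append(
            f"trial {i}: params={t['params']} final_val_acc={final:.3f} trend={trend}"
        )
        if final > best_acc:
            best_idx, best_acc = i, final
    lines.append(f"best: trial {best_idx} (val_acc={best_acc:.3f})")
    return "\n".join(lines)

# Toy trajectory history for two hyperparameter trials.
trials = [
    {"params": {"lr": 0.1},  "val_acc": [0.60, 0.58]},
    {"params": {"lr": 0.01}, "val_acc": [0.62, 0.71]},
]
print(summarize_trajectories(trials))
```

Because the summarization step is deterministic and rule-based, the same trajectory always yields the same context string, which is what makes the downstream SLM's analysis reproducible and auditable.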