AI Summary
This study addresses the challenge of low active learning efficiency in software engineering (SE), where labeled data for multi-objective SE tasks is scarce. We systematically investigate, for the first time, the efficacy of large language models (LLMs) as warm-start generators for active learning. An empirical evaluation across 49 SE tasks compares LLMs against Gaussian process models (GPMs) and tree-structured Parzen estimators (TPEs) in generating high-quality initial query points. Results show that LLMs significantly accelerate convergence and improve final performance on low- and medium-dimensional tasks, achieving an average speedup of 37% over state-of-the-art Bayesian methods. However, GPMs remain superior on high-dimensional tasks, revealing a clear dimensional boundary for LLM-based warm-start applicability. This work establishes the first large-scale empirical benchmark for LLM-augmented active learning in SE and provides interpretable, dimension-aware guidelines for practical deployment.
Abstract
When software engineering (SE) data is scarce, "active learners" use models learned from tiny samples of the data to find the next most informative example to label. In this way, effective models can be generated using very little data. For multi-objective SE tasks, active learning can benefit from an effective set of initial guesses (also known as "warm starts"). This paper explores the use of Large Language Models (LLMs) for creating warm starts. Those results are compared against Gaussian Process Models and Tree of Parzen Estimators. For 49 SE tasks, LLM-generated warm starts significantly improved the performance of low- and medium-dimensional tasks. However, LLM effectiveness diminishes in high-dimensional problems, where Bayesian methods like Gaussian Process Models perform best.
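The loop the abstract describes can be sketched in a few lines. This is a minimal, hypothetical illustration, not the paper's actual method: the objective, candidate pool, and crude 1-nearest-neighbour "surrogate" are all assumptions chosen for brevity, and the warm-start list simply stands in for whatever initial guesses an LLM (or a GPM/TPE) would propose.

```python
def active_learn(objective, pool, warm_start, budget):
    """Minimal active-learning loop for minimisation.

    Warm-start points are labeled first; the remaining budget is spent
    querying the unlabeled candidate whose nearest labeled neighbour
    has the lowest observed value (a crude 1-NN surrogate, standing in
    for the Gaussian-process or TPE models compared in the paper).
    """
    labeled = {x: objective(x) for x in warm_start}
    while len(labeled) < budget:
        candidates = [x for x in pool if x not in labeled]
        if not candidates:
            break

        def surrogate(x):
            # Predict a candidate's value from its nearest labeled point.
            nearest = min(labeled, key=lambda seen: abs(seen - x))
            return labeled[nearest]

        pick = min(candidates, key=surrogate)  # most promising candidate
        labeled[pick] = objective(pick)        # spend one label on it
    best = min(labeled, key=labeled.get)
    return best, labeled[best]

# Hypothetical 1-D task: minimise (x - 7)^2 over 20 integer configurations.
objective = lambda x: (x - 7) ** 2
pool = list(range(20))

# A warm start that lands near the optimum (the role LLM guesses play)
# lets the loop converge within a tiny labeling budget.
best, value = active_learn(objective, pool, warm_start=[6, 10], budget=6)
```

The quality of the warm start is exactly what the paper varies: with good initial guesses the surrogate is steered toward the optimum from the first query, so fewer labels are needed overall.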