🤖 AI Summary
This work addresses the inefficiency of existing large language model (LLM)-driven feature engineering approaches, which often treat LLMs as black-box optimizers and lack effective evaluation of feature transformation utility, leading to redundant and ineffective operations. To overcome this, the authors propose a human-in-the-loop feature engineering framework that decouples operation generation from selection: candidate transformations are generated by an LLM and then filtered using Bayesian utility modeling with uncertainty estimation. Crucially, human expert preference feedback is incorporated early in the process to guide exploration. By integrating active learning with interactive human feedback, the method significantly improves feature engineering performance on both synthetic and real-world datasets while substantially reducing users' cognitive load across diverse tabular data settings.
📄 Abstract
Large language models (LLMs) are increasingly used to automate feature engineering in tabular learning. Given task-specific information, LLMs can propose diverse feature transformation operations to enhance downstream model performance. However, current approaches typically cast the LLM as a black-box optimizer responsible for both proposing and selecting operations based solely on its internal heuristics. These heuristics often lack calibrated estimates of operation utility, leading to repeated exploration of low-yield operations without a principled strategy for prioritizing promising directions. In this paper, we propose a human-LLM collaborative feature engineering framework for tabular learning. We begin by decoupling the transformation operation proposal and selection processes: the LLM is used solely to generate operation candidates, while selection is guided by explicitly modeling the utility and uncertainty of each proposed operation. Since accurate utility estimation can be difficult, especially in the early rounds of feature engineering, we design a mechanism within the framework that selectively elicits human expert preference feedback, comparing which operations are more promising, and incorporates it into the selection process to help identify more effective operations. Our evaluations in both a synthetic study and a real user study demonstrate that the proposed framework improves feature engineering performance across a variety of tabular datasets and reduces users' cognitive load during the feature engineering process.
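The decoupled propose-then-select loop described above can be sketched as follows. This is a minimal illustration, not the paper's actual formulation: the Gaussian posterior over each operation's utility, the UCB-style acquisition score, and the simple mean shift used to encode a human preference are all assumptions made for the sake of the example.

```python
class OperationSelector:
    """Illustrative selector over LLM-proposed transformation operations.

    Each candidate operation's utility is modeled as a Gaussian posterior
    (mean, variance); selection trades off estimated utility against
    uncertainty, and human pairwise preferences nudge the posteriors.
    """

    def __init__(self, candidates, prior_mean=0.0, prior_var=1.0, noise_var=0.1):
        # One (mean, variance) pair per candidate operation string.
        self.stats = {op: [prior_mean, prior_var] for op in candidates}
        self.noise_var = noise_var

    def update(self, op, observed_utility):
        # Conjugate Gaussian update after observing the operation's
        # effect on downstream model performance.
        mean, var = self.stats[op]
        new_var = 1.0 / (1.0 / var + 1.0 / self.noise_var)
        new_mean = new_var * (mean / var + observed_utility / self.noise_var)
        self.stats[op] = [new_mean, new_var]

    def apply_preference(self, preferred, other, shift=0.2):
        # Early-round human feedback: a pairwise preference shifts the
        # posterior means apart (a deliberately simple stand-in for a
        # proper preference-learning update).
        self.stats[preferred][0] += shift
        self.stats[other][0] -= shift

    def select(self, beta=1.0):
        # UCB-style acquisition: mean + beta * std, so uncertain but
        # promising operations still get explored.
        return max(
            self.stats,
            key=lambda op: self.stats[op][0] + beta * self.stats[op][1] ** 0.5,
        )
```

A typical round would generate candidates with the LLM (e.g. `["log(x1)", "x1*x2", "bin(x3)"]`), optionally ask the expert which of two candidates looks more promising via `apply_preference`, pick the next operation with `select`, and feed its measured validation gain back through `update`.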