🤖 AI Summary
Database tuning faces two key challenges: an enormous configuration space (on the order of trillions of possible configurations) and dynamic shifts in workloads and schemas that leave existing tuners poorly adapted. This paper proposes Booster, a framework that leverages large language models (LLMs) to model historical query–configuration contexts, enabling cross-workload and cross-schema knowledge transfer. Booster uses prompt engineering to elicit query-level configuration recommendations and applies beam search to compose them into a holistic configuration. Paired with cost-, machine learning-, and LLM-based tuners across diverse OLAP workloads, Booster helps discover configurations up to 74% better than state-of-the-art baselines, in up to 4.7× less time. Crucially, it significantly improves a tuner's self-adaptation to evolving environments, generalizing to unseen workloads and schema changes.
📝 Abstract
Tuning database management systems (DBMSs) is challenging due to trillions of possible configurations and evolving workloads. Recent tuning advances have led to breakthroughs in searching this vast configuration space. However, because of their design and inability to leverage query-level historical insights, existing automated tuners struggle to adapt and re-optimize the DBMS when the environment changes (e.g., workload drift, schema transfer).
This paper presents Booster, a framework that assists existing tuners in adapting to environment changes (e.g., workload drift, cross-schema transfer). Booster structures historical tuning artifacts into query-configuration contexts, prompts large language models (LLMs) to suggest configurations for each query based on relevant contexts, and then composes the query-level suggestions into a holistic configuration with beam search. On multiple OLAP workloads, we evaluate Booster's ability to assist different state-of-the-art tuners (cost-, machine learning-, and LLM-based) in adapting to environment changes. By composing recommendations derived from query-level insights, Booster assists tuners in discovering configurations that are up to 74% better, in up to 4.7x less time, than the alternative approach of continuing to tune from historical configurations.
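The composition step described above can be sketched as a small beam search over per-query knob suggestions. The sketch below is illustrative only: the knob names, scores, and max-value conflict policy are assumptions for the example, not Booster's actual implementation.

```python
def merge(config, suggestion):
    """Merge one query's suggested knobs into a partial configuration.

    When two queries suggest different values for the same knob, this
    sketch resolves the conflict by taking the larger value and counting
    it as a conflict (a stand-in for a real resolution policy).
    """
    merged = dict(config)
    conflicts = 0
    for knob, value in suggestion.items():
        if knob in merged and merged[knob] != value:
            conflicts += 1
            merged[knob] = max(merged[knob], value)
        else:
            merged[knob] = value
    return merged, conflicts

def beam_search(per_query_suggestions, beam_width=3, conflict_penalty=0.5):
    """Compose per-query (config, score) suggestions into one configuration.

    Each beam entry is a (partial configuration, cumulative score) pair;
    at every query we extend each entry with each suggestion, penalize
    knob conflicts, and keep the top `beam_width` candidates.
    """
    beam = [({}, 0.0)]
    for suggestions in per_query_suggestions:
        candidates = []
        for config, score in beam:
            for suggestion, s in suggestions:
                merged, conflicts = merge(config, suggestion)
                candidates.append((merged, score + s - conflict_penalty * conflicts))
        beam = sorted(candidates, key=lambda entry: entry[1], reverse=True)[:beam_width]
    return beam[0]

# Hypothetical per-query suggestions: (knob settings, estimated benefit).
q1 = [({"work_mem": 64, "parallel_workers": 4}, 2.0), ({"work_mem": 32}, 1.5)]
q2 = [({"work_mem": 128}, 1.8), ({"parallel_workers": 8}, 1.0)]

best_config, best_score = beam_search([q1, q2])
# best_config → {'work_mem': 128, 'parallel_workers': 4}
```

The conflict penalty encodes the intuition that a holistic configuration should favor suggestion sets that agree; a real system would score candidates against workload-level performance rather than a simple additive heuristic.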