This is Going to Sound Crazy, But What If We Used Large Language Models to Boost Automatic Database Tuning Algorithms By Leveraging Prior History? We Will Find Better Configurations More Quickly Than Retraining From Scratch!

📅 2025-10-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Database tuning faces two key challenges: an enormous configuration space (on the order of trillions of possible settings) and dynamic shifts in workloads and schemas, which leave existing tuners poorly adaptive. This paper proposes Booster, the first framework to leverage large language models (LLMs) for modeling historical query–configuration contexts, enabling cross-workload and cross-schema knowledge transfer. Booster uses prompt engineering to elicit query-level configuration recommendations and applies beam search to compose them into a holistic configuration. Assisting cost-based, machine-learning-based, and LLM-based tuners alike, Booster discovers configurations up to 74% better than state-of-the-art baselines across diverse OLAP workloads while converging up to 4.7× faster. Crucially, it markedly improves a tuner's self-adaptation to evolving environments, generalizing robustly across unseen workloads and schema changes.

📝 Abstract
Tuning database management systems (DBMSs) is challenging due to trillions of possible configurations and evolving workloads. Recent advances in tuning have led to breakthroughs in optimizing over the possible configurations. However, due to their design and inability to leverage query-level historical insights, existing automated tuners struggle to adapt and re-optimize the DBMS when the environment changes (e.g., workload drift, schema transfer). This paper presents the Booster framework that assists existing tuners in adapting to environment changes (e.g., drift, cross-schema transfer). Booster structures historical artifacts into query-configuration contexts, prompts large language models (LLMs) to suggest configurations for each query based on relevant contexts, and then composes the query-level suggestions into a holistic configuration with beam search. With multiple OLAP workloads, we evaluate Booster's ability to assist different state-of-the-art tuners (e.g., cost-/machine learning-/LLM-based) in adapting to environment changes. By composing recommendations derived from query-level insights, Booster assists tuners in discovering configurations that are up to 74% better and in up to 4.7x less time than the alternative approach of continuing to tune from historical configurations.
Problem

Research questions and friction points this paper is trying to address.

Enhancing database tuning adaptation to workload changes
Leveraging LLMs for query-level configuration recommendations
Accelerating optimal configuration discovery using historical data
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs generate configurations from historical query contexts
Beam search composes query-level suggestions into holistic configurations
Framework assists existing tuners to adapt to environment changes
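The composition step above can be sketched with a small beam search: each query contributes a few candidate knob settings, and the search keeps only the best-scoring partial configurations as it merges them. This is a minimal illustration, not Booster's actual implementation; the knob names, suggestion shapes, and the scoring function are all hypothetical stand-ins.

```python
# Hedged sketch: composing per-query knob suggestions into one global
# configuration with beam search. All names and the scoring function are
# hypothetical illustrations, not the paper's actual code.

# Per-query suggestions, e.g. as an LLM might return them: each query
# maps to a few candidate settings for the knobs it cares about.
query_suggestions = {
    "q1": [{"work_mem": "256MB"}, {"work_mem": "64MB"}],
    "q2": [{"hash_mem_multiplier": "2"}, {"work_mem": "256MB"}],
    "q3": [{"effective_cache_size": "8GB"}],
}

def score(config):
    """Hypothetical stand-in for an estimated-benefit model.

    Here we simply reward covering more knobs; a real system would use
    a cost model or a learned predictor of workload performance.
    """
    return len(config)

def beam_compose(suggestions, beam_width=2):
    """Merge query-level suggestions, keeping the top `beam_width`
    partial configurations at each step."""
    beam = [{}]  # start from the empty configuration
    for query, candidates in suggestions.items():
        expanded = []
        for partial in beam:
            for cand in candidates:
                # later settings win on knob conflicts
                expanded.append({**partial, **cand})
        # prune to the best-scoring partial configurations
        beam = sorted(expanded, key=score, reverse=True)[:beam_width]
    return beam[0]

config = beam_compose(query_suggestions)
```

With a beam width of 1 this degenerates to greedy merging; widening the beam lets conflicting per-query suggestions survive long enough for a better global combination to emerge.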