🤖 AI Summary
Existing query optimizers rely on manually crafted, extensive rule sets, limiting scalability to multimodal scenarios; moreover, no prior work has explored the potential of large language models (LLMs) for query optimization. This paper introduces LaPuda—the first LLM-based, strategy-driven multimodal query optimizer—departing from traditional rule enumeration by leveraging semantic understanding to autonomously generate efficient execution plans. Its core contributions are: (1) a strategy-guided LLM optimization paradigm that encodes domain knowledge into abstract, executable policies; (2) the Guided Cost Descent (GCD) algorithm, which ensures monotonic cost reduction and correctness of optimization trajectories; and (3) native support for multimodal query modeling and end-to-end evaluation. Experiments demonstrate that LaPuda accelerates query execution by 1–3× over multimodal workloads, significantly outperforming both rule-based and cost-model baselines while maintaining robust performance across diverse queries.
📝 Abstract
Large language models (LLMs) have marked a pivotal moment in the field of machine learning and deep learning. Recently, their capability for query planning has been investigated, covering both single-modal and multi-modal queries. However, there is no work on the query optimization capability of LLMs. As a critical (arguably the most important) step that significantly impacts the execution performance of the query plan, query optimization deserves such analysis and should not be overlooked. From another perspective, existing query optimizers are usually rule-based or rule-based + cost-based, i.e., they depend on manually created rules to complete the query plan rewrite/transformation. Given that modern optimizers include hundreds to thousands of rules, designing a multi-modal query optimizer in a similar fashion would be prohibitively time-consuming, since one would have to enumerate as many multi-modal optimization rules as possible, a problem that has not been well addressed to date. In this paper, we investigate the query optimization ability of LLMs and design LaPuda, a novel LLM- and policy-based multi-modal query optimizer. Instead of enumerating specific and detailed rules, LaPuda needs only a few abstract policies to guide the LLM in the optimization, saving much time and human effort. Furthermore, to prevent the LLM from making mistakes or applying negative optimizations, we borrow the idea of gradient descent and propose a guided cost descent (GCD) algorithm to perform the optimization, keeping the optimization moving in the correct direction. In our evaluation, our methods outperform the baselines in most cases. For example, the optimized plans generated by our methods achieve 1–3× higher execution speed than those produced by the baselines.
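The abstract's description of guided cost descent — accept an LLM-proposed plan rewrite only when it lowers estimated cost, so negative optimizations are rejected — can be sketched as follows. This is a minimal illustration, not the paper's implementation: `propose_rewrites` (standing in for LLM-suggested rewrites) and `estimate_cost` are hypothetical callables supplied by the caller.

```python
def guided_cost_descent(plan, propose_rewrites, estimate_cost, max_steps=10):
    """Sketch of a guided-cost-descent loop: a candidate rewrite is accepted
    only if it strictly lowers the estimated cost, so the cost sequence is
    monotonically decreasing and 'negative optimization' is filtered out."""
    best_cost = estimate_cost(plan)
    for _ in range(max_steps):
        improved = False
        # In LaPuda's setting, candidates would come from policy-guided
        # LLM rewrites; here they are produced by a caller-supplied stub.
        for candidate in propose_rewrites(plan):
            cost = estimate_cost(candidate)
            if cost < best_cost:  # guided step: keep only cost descents
                plan, best_cost, improved = candidate, cost, True
                break
        if not improved:  # no candidate lowers the cost: local optimum
            break
    return plan, best_cost
```

With a toy state space (plans as integers, cost as distance from 0), the loop walks monotonically downhill and stops when no proposal improves the plan.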