🤖 AI Summary
This work addresses the limitations of existing SQL rewriting approaches, which either rely on rigid rule-based systems that lack adaptability or on large language models (LLMs) that incur high computational costs and privacy concerns. To overcome the scarcity of high-quality, domain-specific data for training small language models, the authors propose LASER, a novel framework that integrates Monte Carlo Tree Search (MCTS) with LLM-guided mutation to generate SQL-MCTS, a large-scale corpus of complex, slow queries. They further introduce SQL-GRPO, a group-relative policy optimization algorithm featuring anchored group advantages and a complexity-adaptive dynamic rollout mechanism. Experiments on compact Qwen3 models demonstrate that LASER achieves a strong balance of execution efficiency, zero-shot transferability, and low inference overhead, significantly outperforming both rule-based systems and LLM-based alternatives.
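The core of SQL-GRPO's group-relative idea can be sketched in a few lines: each sampled rewrite's reward (e.g., negative execution latency) is normalized against the statistics of its rollout group. The sketch below is a minimal, hedged illustration of that mechanism; the `anchor` argument is a hypothetical way an extra baseline reward (such as the original query's latency) might be folded into the group to "anchor" the advantage estimate. The paper's exact Anchored Group Advantage formulation is not specified here and may differ.

```python
import statistics

def group_relative_advantages(rewards, anchor=None):
    """Group-relative advantage in the GRPO style: normalize each rollout's
    reward by the group mean and standard deviation.

    `anchor` is a hypothetical extra baseline reward (not from the paper)
    folded into the group statistics, sketching how an "anchored" variant
    might stabilize the estimate when all sampled rewrites are similar.
    """
    group = list(rewards) + ([anchor] if anchor is not None else [])
    mu = statistics.fmean(group)
    sigma = statistics.pstdev(group) or 1.0  # guard against zero variance
    return [(r - mu) / sigma for r in rewards]
```

For example, with rewards `[1.0, 2.0, 3.0]` the middle rollout gets zero advantage while the slowest and fastest get symmetric negative and positive advantages; adding an anchor below every sampled reward pushes all advantages positive, rewarding any improvement over the baseline.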
📝 Abstract
Query rewriting, the process of transforming queries into semantically equivalent yet more efficient variants, is crucial for database optimization. Existing solutions predominantly rely on either rule-based heuristics or Large Language Models (LLMs): traditional rule-based methods lack adaptability, while LLM-based approaches incur prohibitive inference costs and privacy risks. Small Language Models (SLMs) present a compelling middle ground, potentially offering both flexibility and efficiency. However, the development of such compact models is severely bottlenecked by the scarcity of high-quality, domain-specific training data. To bridge this gap, we introduce LASER, a data-centric framework designed to empower small models for robust SQL optimization. First, to address the scarcity of existing benchmarks and the limited optimization headroom of generic synthetic queries, we construct SQL-MCTS, a large-scale corpus of complex, slow queries. We employ an MCTS-based hybrid expansion strategy that combines rule-guided anti-patterns with LLM mutations to evolve structurally expressive seeds into execution-verified slow variants. Second, to enable the model to autonomously discover latency-aware rewriting patterns, we propose SQL-GRPO, a specialized alignment strategy adapted from Group Relative Policy Optimization. By integrating an Anchored Group Advantage to refine advantage estimation and a Complexity-Adaptive Dynamic Rollout to efficiently allocate exploration budgets, this approach effectively empowers compact models to master execution-based optimization logic. Implemented on Qwen3 models, LASER significantly outperforms rule-based systems and LLMs in execution efficiency, while exhibiting robust zero-shot transferability with minimal overhead.
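To ground the kind of "semantically equivalent yet more efficient" transformation the abstract describes, here is a minimal, hypothetical example of a classic rewrite: an `IN`-subquery anti-pattern reformulated as a join on the filter key, with equivalence checked by executing both forms. The schema, data, and SQLite backend are illustrative assumptions, not from the paper, and execution-based verification here only checks result equality, a simplified stand-in for the paper's latency-based reward.

```python
import sqlite3

# Illustrative in-memory schema (hypothetical; not from the paper).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers(id INTEGER PRIMARY KEY, region TEXT);
    CREATE TABLE orders(id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
    INSERT INTO customers VALUES (1,'EU'),(2,'US'),(3,'EU');
    INSERT INTO orders VALUES (10,1,99.0),(11,2,15.0),(12,3,42.0),(13,1,7.5);
""")

# Anti-pattern: IN-subquery, which naive engines may re-evaluate per row.
slow = """
    SELECT id, total FROM orders
    WHERE customer_id IN (SELECT id FROM customers WHERE region = 'EU')
"""
# Rewrite: the same semi-join expressed as an inner join on the unique key.
fast = """
    SELECT o.id, o.total FROM orders o
    JOIN customers c ON c.id = o.customer_id
    WHERE c.region = 'EU'
"""

rows_slow = sorted(conn.execute(slow).fetchall())
rows_fast = sorted(conn.execute(fast).fetchall())
assert rows_slow == rows_fast  # execution-verified semantic equivalence
```

Note the rewrite is only equivalence-preserving because `customers.id` is unique; a join on a non-unique key could duplicate rows, which is exactly the kind of subtlety that makes execution-based verification necessary.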