AI Summary
To address instability, poor interpretability, and operational complexity in index benefit estimation for database index tuning, this paper proposes Beauty, the first uncertainty-aware framework for this task. Methodologically, Beauty integrates uncertainty quantification deeply into learned index benefit prediction: it jointly leverages an AutoEncoder and Monte Carlo Dropout to model structured uncertainty, and it triggers the what-if query optimizer for calibration only when the predicted uncertainty is high. Extensive experiments across 16 models and six real-world datasets demonstrate that Beauty significantly outperforms state-of-the-art uncertainty quantification methods: it eliminates worst-case estimation scenarios, more than triples the frequency of best-case estimates, and simultaneously achieves high accuracy, robustness, and interpretability.
Abstract
Index tuning is crucial for optimizing database performance by selecting optimal indexes for a given workload. The key to this process is an accurate and efficient benefit estimator. Traditional methods relying on what-if tools often suffer from inefficiency and inaccuracy, while learning-based models provide a promising alternative but face challenges such as instability, lack of interpretability, and complex management. To overcome these limitations, we adopt a novel approach: quantifying the uncertainty in learning-based models' results, thereby combining the strengths of traditional and learning-based methods for reliable index tuning. We propose Beauty, the first uncertainty-aware framework, which enhances learning-based models with uncertainty quantification and uses what-if tools as a complementary mechanism to improve reliability and reduce management complexity. Specifically, we introduce a novel method that combines an AutoEncoder with Monte Carlo Dropout to jointly quantify uncertainty, tailored to the characteristics of the benefit estimation task. In experiments involving sixteen models, our approach outperformed existing uncertainty quantification methods in the majority of cases. We also conducted index tuning tests on six datasets: applying the Beauty framework eliminated worst-case scenarios and more than tripled the occurrence of best-case scenarios.
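The estimation loop described in the abstract (a learned benefit prediction, joint uncertainty from Monte Carlo Dropout and an AutoEncoder's reconstruction error, and a fallback to the what-if optimizer when uncertainty is high) can be sketched roughly as follows. This is a minimal illustration only: the weight shapes, thresholds, and the `whatif` callback are hypothetical placeholders, not the paper's actual models or API.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_predict(x, W1, W2, n_samples=100, p_drop=0.1):
    # Monte Carlo Dropout: keep dropout active at inference and run many
    # stochastic forward passes; the spread of predictions approximates
    # the model's (epistemic) uncertainty about the index benefit.
    preds = []
    for _ in range(n_samples):
        h = np.maximum(W1 @ x, 0.0)            # one hidden ReLU layer
        mask = rng.random(h.shape) > p_drop    # random dropout mask
        h = h * mask / (1.0 - p_drop)          # inverted-dropout scaling
        preds.append(float(W2 @ h))
    preds = np.asarray(preds)
    return preds.mean(), preds.std()

def reconstruction_error(x, We, Wd):
    # AutoEncoder reconstruction error: inputs unlike the training
    # distribution reconstruct poorly, signaling an unreliable estimate.
    z = np.tanh(We @ x)        # encode
    x_hat = Wd @ z             # decode
    return float(np.mean((x - x_hat) ** 2))

def estimate_benefit(x, model, ae, whatif, std_thresh=0.5, recon_thresh=1.0):
    # Joint uncertainty gate: trust the learned estimate only when both
    # signals are low; otherwise calibrate via the what-if optimizer.
    mean, std = mc_dropout_predict(x, *model)
    recon = reconstruction_error(x, *ae)
    if std > std_thresh or recon > recon_thresh:
        return whatif(x)       # complementary what-if call (placeholder)
    return mean
```

In this sketch the what-if optimizer is invoked only for the uncertain minority of estimates, which is how the framework aims to keep the efficiency of the learned model while avoiding its worst-case errors.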