AutoIndexer: A Reinforcement Learning-Enhanced Index Advisor Towards Scaling Workloads

📅 2025-07-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
For large-scale analytical workloads, index selection faces scalability challenges in reinforcement learning (RL) due to exponential action-space growth and high trial-and-error costs. This paper proposes an end-to-end index recommendation framework that integrates workload compression with a customized deep RL architecture. Its core innovations are: (1) the first integration of lightweight workload compression with incremental policy optimization tailored for index decisions, enabling effective pruning of high-dimensional action spaces; and (2) joint incorporation of query cost modeling and online execution feedback to enhance recommendation accuracy and responsiveness. Experiments show that, compared to the no-index baseline, the framework reduces end-to-end query latency by up to 95%; relative to state-of-the-art RL-based methods, it further reduces average workload cost by 20% and cuts tuning time by over 50%.

📝 Abstract
Efficiently selecting indexes is fundamental to database performance optimization, particularly for systems handling large-scale analytical workloads. While deep reinforcement learning (DRL) has shown promise in automating index selection through its ability to learn from experience, few works address how these RL-based index advisors can adapt to scaling workloads due to exponentially growing action spaces and heavy trial and error. To address these challenges, we introduce AutoIndexer, a framework that combines workload compression, query optimization, and specialized RL models to scale index selection effectively. By operating on compressed workloads, AutoIndexer substantially lowers search complexity without sacrificing much index quality. Extensive evaluations show that it reduces end-to-end query execution time by up to 95% versus non-indexed baselines. On average, it outperforms state-of-the-art RL-based index advisors by approximately 20% in workload cost savings while cutting tuning time by over 50%. These results affirm AutoIndexer's practicality for large and diverse workloads.
Problem

Research questions and friction points this paper is trying to address.

Efficiently selecting indexes for large-scale analytical workloads
Adapting RL-based index advisors to scaling workloads
Reducing search complexity without sacrificing index quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines workload compression, query optimization, and specialized RL models in one framework
Lowers search complexity by operating on compressed workloads, without sacrificing much index quality
Cuts tuning time by over 50% relative to state-of-the-art RL-based index advisors
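The workload-compression idea behind these gains can be sketched minimally: structurally identical queries (same template, different literals) are collapsed into one weighted representative, so the tuner evaluates far fewer queries and far fewer candidate indexes. This is an illustrative assumption of how such compression could work, not AutoIndexer's actual algorithm; all function names here are hypothetical.

```python
# Hypothetical sketch of template-based workload compression (not the
# paper's actual method): strip literals so queries that differ only in
# constants share a template, then keep one weighted representative each.
import re
from collections import defaultdict

def normalize(sql: str) -> str:
    """Replace literals with '?' so structurally identical queries match."""
    sql = re.sub(r"'[^']*'", "?", sql)          # string literals
    sql = re.sub(r"\b\d+(\.\d+)?\b", "?", sql)  # numeric literals
    return re.sub(r"\s+", " ", sql).strip().lower()

def compress_workload(queries):
    """Return (representative_query, weight) pairs, one per template."""
    groups = defaultdict(list)
    for q in queries:
        groups[normalize(q)].append(q)
    return [(qs[0], len(qs)) for qs in groups.values()]

workload = [
    "SELECT * FROM orders WHERE o_id = 1",
    "SELECT * FROM orders WHERE o_id = 42",
    "SELECT name FROM customer WHERE region = 'EU'",
]
compressed = compress_workload(workload)
print(len(compressed))  # 2 templates instead of 3 raw queries
```

An RL-based advisor would then derive its candidate-index action space from the compressed representatives (weighted by frequency) rather than from every raw query, which is what makes the action space tractable as the workload scales.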