🤖 AI Summary
To address the high computational and memory overheads of deploying large language models (LLMs) in OLAP databases, this paper proposes IOLM-DB, a lightweight, query-aware framework for LLM-augmented query processing. Its core idea is to dynamically generate a compact, task-specific model for each query, combining instance-level model compression techniques (quantization, sparsification, and structural pruning) with data-aware sampling, knowledge distillation, and parallel optimization strategies. This approach significantly reduces resource consumption: model size shrinks by up to 76% and peak throughput increases by up to 3.31×, enabling high-concurrency execution as well as efficient caching and batching. The experimental evaluation demonstrates the feasibility and scalability of native LLM-powered query processing in large-scale OLAP systems on commodity hardware.
📝 Abstract
Large Language Models (LLMs) can enhance analytics systems with powerful data summarization, cleaning, and semantic transformation capabilities. However, deploying LLMs at scale -- processing millions to billions of rows -- remains prohibitively expensive in computation and memory. We present IOLM-DB, a novel system that makes LLM-enhanced database queries practical through query-specific model optimization. Instead of using general-purpose LLMs, IOLM-DB generates lightweight, specialized models tailored to each query's specific needs using representative data samples. IOLM-DB reduces model footprints by up to 76% and increases throughput by up to 3.31× while maintaining accuracy through aggressive compression techniques, including quantization, sparsification, and structural pruning. We further show how our approach enables higher parallelism on existing hardware and seamlessly supports caching and batching strategies to reduce overheads. Our prototype demonstrates that leveraging LLM queries inside analytics systems is feasible at scale, opening new possibilities for future OLAP applications.