The Case for Instance-Optimized LLMs in OLAP Databases

📅 2025-07-07
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the high computational and memory overheads of deploying large language models (LLMs) in OLAP databases, this paper proposes IOLM-DB, a query-aware, lightweight, LLM-augmented query-processing framework. Its core innovation is dynamically generating compact, task-specific models for each query class, combining instance-level model compression techniques (quantization, sparsification, and structural pruning) with data-aware sampling, knowledge distillation, and parallel-execution optimizations. The approach substantially reduces resource consumption: model footprints shrink by up to 76% and peak throughput rises by up to 3.31×, enabling high-concurrency execution along with efficient caching and batching. Experiments demonstrate the feasibility and scalability of native LLM-powered query processing in large-scale OLAP systems on commodity hardware.

📝 Abstract
Large Language Models (LLMs) can enhance analytics systems with powerful data summarization, cleaning, and semantic transformation capabilities. However, deploying LLMs at scale -- processing millions to billions of rows -- remains prohibitively expensive in computation and memory. We present IOLM-DB, a novel system that makes LLM-enhanced database queries practical through query-specific model optimization. Instead of using general-purpose LLMs, IOLM-DB generates lightweight, specialized models tailored to each query's specific needs using representative data samples. IOLM-DB reduces model footprints by up to 76% and increases throughput by up to 3.31× while maintaining accuracy through aggressive compression techniques, including quantization, sparsification, and structural pruning. We further show how our approach enables higher parallelism on existing hardware and seamlessly supports caching and batching strategies to reduce overheads. Our prototype demonstrates that leveraging LLM queries inside analytics systems is feasible at scale, opening new possibilities for future OLAP applications.
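The two compression techniques the abstract names most prominently, quantization and sparsification, can be illustrated with a minimal, self-contained sketch. This is not IOLM-DB's implementation; the helpers `quantize_int8` and `sparsify` are hypothetical, showing only the generic ideas of post-training symmetric int8 quantization and magnitude-based pruning on a single weight matrix.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: map float32 weights to
    [-127, 127] integers plus a single scale factor for dequantization."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def sparsify(w, keep_ratio=0.5):
    """Magnitude pruning: zero out the smallest-magnitude weights,
    keeping only the top `keep_ratio` fraction by absolute value."""
    k = int(w.size * keep_ratio)
    thresh = np.sort(np.abs(w).ravel())[-k]
    return w * (np.abs(w) >= thresh)

rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256)).astype(np.float32)

q, scale = quantize_int8(w)
w_sparse = sparsify(w, keep_ratio=0.25)

print(q.nbytes / w.nbytes)     # int8 storage is 4x smaller than float32
print(np.mean(w_sparse == 0))  # roughly 75% of weights zeroed
```

In a real system the quantized and pruned weights would replace the originals inside the model, and the accuracy impact would be checked against representative query samples, which is the role data-aware sampling plays in the paper's pipeline.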
Problem

Research questions and friction points this paper is trying to address.

Optimizing LLMs for efficient large-scale database queries
Reducing computational and memory costs of LLM deployment
Enhancing query performance via lightweight specialized models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Query-specific lightweight LLM optimization
Aggressive compression techniques for efficiency
Enhanced parallelism and caching strategies
Bardia Mohammadi
Max Planck Institute for Software Systems, Saarbrücken, Germany
Laurent Bindschaedler
Research Group Leader, MPI-SWS
Big Data · Distributed Systems · Machine Learning · Cloud Computing · Security