Semantic Caching for OLAP via LLM-Based Query Canonicalization (Extended Version)

📅 2026-02-23
🤖 AI Summary
This work proposes the first semantic caching mechanism tailored to star-schema OLAP workloads that enables safe reuse of semantically equivalent queries across diverse interfaces—including BI tools, notebooks, and natural language systems—where traditional caches keyed on SQL text or AST fail. The approach uses large language models to normalize both SQL and natural language queries into a unified “OLAP intent signature,” augmented with strict schema validation and confidence gating to guarantee zero false cache hits. To broaden applicability, it introduces two correctness-preserving derivation strategies: roll-up and filter-down. Evaluated on 1,395 queries from the TPC-DS, SSB, and NYC TLC benchmarks, the method achieves an 82% cache hit rate—substantially outperforming text-based (28%) and AST-based (56%) baselines—and doubles the hit rate on hierarchical queries.
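The central idea is that two queries with different surface forms (SQL from a BI tool, SQL from a notebook, or generated from natural language) should collapse to one cache key whenever they express the same analytical intent. A minimal sketch of such a key, assuming illustrative field names rather than the paper's exact signature schema:

```python
# Hypothetical sketch of an "OLAP intent signature" as a canonical cache key.
# Measures, grouping levels, filters, and the time window are normalized into
# a hashable, order-insensitive form, so reuse is exact-match only (never
# fuzzy similarity). Field names and encodings here are assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class IntentSignature:
    measures: frozenset    # e.g. {("sales", "sum")}
    group_by: frozenset    # grouping levels, e.g. {"date.month"}
    filters: frozenset     # normalized predicates, e.g. {("region", "=", "EU")}
    time_window: tuple     # (start, end) as ISO date strings

cache = {}

def lookup(sig: IntentSignature):
    """A hit requires an identical intent signature; otherwise miss."""
    return cache.get(sig)

# Two queries with reordered clauses canonicalize to the same signature:
a = IntentSignature(frozenset({("sales", "sum")}), frozenset({"date.month"}),
                    frozenset({("region", "=", "EU")}), ("2025-01-01", "2025-12-31"))
b = IntentSignature(frozenset({("sales", "sum")}), frozenset({"date.month"}),
                    frozenset({("region", "=", "EU")}), ("2025-01-01", "2025-12-31"))
cache[a] = "cached result rows"
assert lookup(b) == "cached result rows"
```

Using `frozenset` for measures, grouping levels, and filters makes the key insensitive to clause ordering, which is one common source of fragmentation in text- and AST-keyed caches.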

📝 Abstract
Analytical workloads exhibit substantial semantic repetition, yet most production caches key entries by SQL surface form (text or AST), fragmenting reuse across BI tools, notebooks, and NL interfaces. We introduce a safety-first middleware cache for dashboard-style OLAP over star schemas that canonicalizes both SQL and NL into a unified key space -- the OLAP Intent Signature -- capturing measures, grouping levels, filters, and time windows. Reuse requires exact intent matches under strict schema validation and confidence-gated NL acceptance; two correctness-preserving derivations (roll-up, filter-down) extend coverage without approximate matching. Across TPC-DS, SSB, and NYC TLC (1,395 queries), we achieve 82% hit rate versus 28% (text) and 56% (AST) with zero false hits; derivations double hit rate on hierarchical queries.
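The roll-up derivation mentioned in the abstract extends coverage beyond exact matches: a cached result at a finer grouping level (e.g. daily totals) can answer a coarser query (e.g. monthly totals) by re-aggregating the cached rows, without touching the base tables. A minimal sketch, assuming a distributive aggregate such as SUM (for which this is correctness-preserving) and illustrative key formats:

```python
# Hypothetical sketch of the "roll-up" derivation: re-aggregate a cached
# finer-granularity result to answer a coarser query. This only preserves
# correctness for distributive aggregates (SUM, COUNT, MIN, MAX); AVG would
# need sums and counts cached separately. Names and formats are assumptions.
from collections import defaultdict

def roll_up(cached_rows, coarsen):
    """Re-aggregate cached (group_key, value) rows under a coarser key."""
    out = defaultdict(float)
    for key, value in cached_rows:
        out[coarsen(key)] += value   # SUM distributes over the hierarchy
    return dict(out)

# Cached: daily sales; requested: monthly sales.
daily = [("2025-03-01", 10.0), ("2025-03-02", 5.0), ("2025-04-01", 7.0)]
monthly = roll_up(daily, lambda day: day[:7])   # "2025-03-01" -> "2025-03"
assert monthly == {"2025-03": 15.0, "2025-04": 7.0}
```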
Problem

Research questions and friction points this paper is trying to address.

Semantic Caching
OLAP
Query Canonicalization
Cache Reuse
Natural Language Queries
Innovation

Methods, ideas, or system contributions that make the work stand out.

Semantic Caching
Query Canonicalization
OLAP Intent Signature
LLM-based NL Understanding
Correctness-preserving Derivation
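The second derivation, filter-down, is the complement of roll-up: a cached result computed under a broader (or absent) filter can answer a more selective query by filtering the cached rows, provided the filtered column appears among the cached grouping levels. A minimal sketch under those assumptions, with illustrative names:

```python
# Hypothetical sketch of the "filter-down" derivation: select the subset of
# cached (group_key, value) rows that satisfy the new, stricter predicate.
# Safe only when the predicate's column is a grouping key of the cached
# result; otherwise the needed detail is not recoverable from the cache.
def filter_down(cached_rows, predicate):
    """Answer a more selective query from a cached, broader result."""
    return {key: value for key, value in cached_rows.items() if predicate(key)}

# Cached: sales grouped by region (no region filter); requested: EU only.
by_region = {"EU": 100.0, "NA": 250.0, "APAC": 80.0}
assert filter_down(by_region, lambda region: region == "EU") == {"EU": 100.0}
```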