Scaling Text2SQL via LLM-efficient Schema Filtering with Functional Dependency Graph Rerankers

📅 2025-12-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address Text-to-SQL failures on large-scale schemas (hundreds of tables, tens of thousands of columns) caused by LLM context limitations, this paper proposes a query-aware, lightweight schema compression framework. First, it employs an LLM to generate query-aware column encodings. Second, it constructs a functional dependency graph and introduces a lightweight Graph Transformer, novel for modeling structural relationships among columns, to re-rank them. Third, it selects the minimal relevant subgraph via a Steiner tree-inspired heuristic search under connectivity constraints. The method achieves near-perfect recall (~100% on Spider 2.0) while surpassing state-of-the-art baselines (e.g., CodeS, SchemaExP) in precision. It scales to schemas exceeding 23,000 columns, with median inference latency under one second, significantly enhancing the practicality and scalability of Text-to-SQL for industrial-scale databases.
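The first two stages of the pipeline can be illustrated with a toy sketch. The scoring here is a deliberate stand-in: token overlap replaces the paper's query-aware LLM encoder, and neighbour-score smoothing replaces the Graph Transformer reranker; column names, sample texts, and the `alpha` mixing weight are all hypothetical.

```python
def rank_columns(question, columns):
    """Stage 1 stand-in: score each column's text (name plus sample
    values/metadata) by token overlap with the question. The paper
    uses a query-aware LLM encoder instead of this toy scorer."""
    q_tokens = set(question.lower().split())
    scores = {}
    for col, text in columns.items():
        toks = set(text.lower().split())
        scores[col] = len(q_tokens & toks) / (len(toks) or 1)
    return scores

def rerank_with_fd_graph(scores, fd_edges, alpha=0.5):
    """Stage 2 stand-in: blend each column's score with the mean score
    of its functional-dependency neighbours, mimicking how a graph
    model lets structurally related columns reinforce each other."""
    neigh = {}
    for u, v in fd_edges:
        neigh.setdefault(u, []).append(v)
        neigh.setdefault(v, []).append(u)
    out = {}
    for col, s in scores.items():
        ns = [scores[n] for n in neigh.get(col, []) if n in scores]
        out[col] = s if not ns else (1 - alpha) * s + alpha * sum(ns) / len(ns)
    return out

# Hypothetical mini-schema: a column is relevant either directly
# (lexical match) or via its FD neighbours.
columns = {
    "users.name": "name of the user",
    "users.id": "user id key",
    "orders.total": "order total amount",
    "logs.ts": "timestamp",
}
fd_edges = [("users.id", "users.name"), ("users.id", "orders.total")]
scores = rank_columns("total amount each user ordered", columns)
reranked = rerank_with_fd_graph(scores, fd_edges)
```

After reranking, `users.id` gains score from its high-scoring neighbour `orders.total` even though the question never mentions an id, which is exactly the inter-column structure that independent per-column ranking misses.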

📝 Abstract
Most modern Text2SQL systems prompt large language models (LLMs) with entire schemas -- mostly column information -- alongside the user's question. While effective on small databases, this approach fails on real-world schemas that exceed LLM context limits, even for commercial models. The recent Spider 2.0 benchmark exemplifies this with hundreds of tables and tens of thousands of columns, where existing systems often break. Current mitigations either rely on costly multi-step prompting pipelines or filter columns by independently ranking them against the user's question, ignoring inter-column structure. To scale existing systems, we introduce GRAST-SQL, an open-source, LLM-efficient schema filtering framework that compacts Text2SQL prompts by (i) ranking columns with a query-aware LLM encoder enriched with values and metadata, (ii) reranking inter-connected columns via a lightweight graph transformer over functional dependencies, and (iii) selecting a connectivity-preserving sub-schema with a Steiner-tree heuristic. Experiments on real datasets show that GRAST-SQL achieves near-perfect recall and higher precision than CodeS, SchemaExP, Qwen rerankers, and embedding retrievers, while maintaining sub-second median latency and scaling to schemas with 23,000+ columns. Our source code is available at https://github.com/thanhdath/grast-sql.
Problem

Research questions and friction points this paper is trying to address.

Scaling Text2SQL systems to handle large real-world database schemas
Reducing prompt size by filtering irrelevant columns while preserving structural dependencies
Overcoming LLM context limits when processing schemas with thousands of columns
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses query-aware LLM encoder for column ranking
Reranks columns via graph transformer on dependencies
Selects sub-schema with Steiner-tree heuristic
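The third innovation can be sketched as a classic greedy Steiner-tree heuristic: treat top-ranked columns as terminals in the schema graph and repeatedly attach the nearest remaining terminal via a shortest path, so the selected sub-schema stays connected (joinable). This is a minimal unit-weight BFS version under assumed column naming; the paper's exact heuristic and edge weighting may differ.

```python
from collections import deque

def shortest_path(adj, src, targets):
    """BFS from src to the nearest node in `targets`; returns the path
    as a list of nodes, or None if no target is reachable."""
    prev = {src: None}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if u in targets:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path
        for v in adj.get(u, ()):
            if v not in prev:
                prev[v] = u
                queue.append(v)
    return None

def steiner_subschema(adj, terminals):
    """Greedy Steiner-tree heuristic: grow a connected node set that
    covers all terminal columns by repeatedly merging in the shortest
    path to the nearest uncovered terminal (unit edge weights)."""
    terminals = list(terminals)
    tree = {terminals[0]}
    remaining = set(terminals[1:])
    while remaining:
        best = None
        for node in tree:
            path = shortest_path(adj, node, remaining)
            if path and (best is None or len(path) < len(best)):
                best = path
        if best is None:
            break  # a terminal is disconnected; leave it out
        tree.update(best)
        remaining -= set(best)
    return tree

# Hypothetical schema graph: edges are intra-table links and FK joins.
adj = {
    "users.name": ["users.id"],
    "users.id": ["users.name", "orders.user_id"],
    "orders.user_id": ["users.id", "orders.id"],
    "orders.id": ["orders.user_id", "orders.total"],
    "orders.total": ["orders.id"],
}
sub = steiner_subschema(adj, ["users.name", "orders.total"])
```

Only the two terminals are query-relevant, but the heuristic also pulls in the key columns needed to join them, which is what preserves executability of the generated SQL.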