Extractive Schema Linking for Text-to-SQL

📅 2025-01-23
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address context limitations, noise sensitivity, and inefficient schema linking in Text-to-SQL for large-scale database schemas, this paper introduces the first extraction-based schema linking paradigm tailored for decoder-only large language models (LLMs). The method formulates schema linking as a token-level classification task, integrating structural encoding of the database schema with question-aware attention—eliminating the need for autoregressive decoding or graph neural networks. This design enables fine-grained precision-recall trade-offs. Evaluated on the Spider and BDSQL benchmarks, our approach achieves an 8.2% absolute improvement in schema linking accuracy over state-of-the-art generative methods, while reducing inference latency by 3.1×. The proposed framework thus significantly enhances both efficiency and accuracy in large-schema Text-to-SQL scenarios.

Technology Category

Application Category

📝 Abstract
Text-to-SQL is emerging as a practical interface for real world databases. The dominant paradigm for Text-to-SQL is cross-database or schema-independent, supporting application schemas unseen during training. The schema of a database defines the tables, columns, column types and foreign key connections between tables. Real world schemas can be large, containing hundreds of columns, but for any particular query only a small fraction will be relevant. Placing the entire schema in the prompt for an LLM can be impossible for models with smaller token windows and expensive even when the context window is large enough to allow it. Even apart from computational considerations, the accuracy of the model can be improved by focusing the SQL generation on only the relevant portion of the database. Schema linking identifies the portion of the database schema useful for the question. Previous work on schema linking has used graph neural networks, generative LLMs, and cross encoder classifiers. We introduce a new approach to adapt decoder-only LLMs to schema linking that is both computationally more efficient and more accurate than the generative approach. Additionally our extractive approach permits fine-grained control over the precision-recall trade-off for schema linking.
Problem

Research questions and friction points this paper is trying to address.

Text-to-SQL
Database Optimization
Information Extraction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Text-to-SQL
Extractive Schema Linking
Decoder-Only Models
🔎 Similar Papers
No similar papers found.