🤖 AI Summary
Text-to-SQL faces challenges including weak reasoning over complex databases, constrained generation spaces, and syntactically or semantically incoherent SQL outputs. To address these, we propose a self-rewarding, heuristic dynamic search framework that integrates Monte Carlo Tree Search (MCTS) with schema-aware dataset construction, marking the first application of self-rewarding reinforcement learning principles to SQL generation search. Our method jointly enforces constraints on SQL syntax, semantics, and database schema. It significantly improves few-shot generalization and cross-model transferability, achieving a 10.8% absolute gain in execution accuracy over the latest baselines on the complex Bird benchmark and surpassing GPT-4-based approaches. The core contribution lies in unifying structured search, schema-aware learning, and self-supervised reward modeling into a single coherent paradigm, thereby enabling more controllable, interpretable, and robust NL2SQL generation.
📝 Abstract
The Text-to-SQL (Text2SQL) task aims to convert natural language queries into executable SQL queries. Thanks to the application of large language models (LLMs), significant progress has been made in this field. However, challenges such as model scalability, limited generation space, and coherence issues in SQL generation still persist. To address these issues, we propose SQL-o1, a Self-Reward-based heuristic search method designed to enhance the reasoning ability of LLMs in SQL query generation. SQL-o1 combines Monte Carlo Tree Search (MCTS) for heuristic process-level search and constructs a Schema-Aware dataset to help the model better understand database schemas. Extensive experiments on the Bird and Spider datasets demonstrate that SQL-o1 improves execution accuracy by 10.8% on the complex Bird dataset compared to the latest baseline methods, even outperforming GPT-4-based approaches. Additionally, SQL-o1 excels in few-shot learning scenarios and shows strong cross-model transferability. Our code is publicly available at: https://github.com/ShuaiLyu0110/SQL-o1.
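The process-level MCTS search described above can be sketched as follows. This is a minimal, self-contained illustration, not SQL-o1's actual implementation: the `CANDIDATES` continuation table stands in for an LLM proposing partial-SQL expansions, and `self_reward` is a hypothetical placeholder for the model's self-reward scoring.

```python
import math
import random

# Toy continuation table standing in for an LLM that proposes
# partial-SQL expansions; all entries here are illustrative.
CANDIDATES = {
    "": ["SELECT name", "SELECT *"],
    "SELECT name": ["SELECT name FROM users"],
    "SELECT *": ["SELECT * FROM users"],
    "SELECT name FROM users": ["SELECT name FROM users WHERE age > 18"],
    "SELECT * FROM users": ["SELECT * FROM users WHERE age > 18"],
}

def self_reward(sql: str) -> float:
    # Placeholder self-reward: a real system would score candidates with
    # the model itself (and/or execution feedback on the database).
    return 0.5 * ("FROM" in sql) + 0.3 * ("WHERE" in sql) + 0.2 * ("name" in sql)

class Node:
    def __init__(self, sql, parent=None):
        self.sql, self.parent = sql, parent
        self.children, self.visits, self.value = [], 0, 0.0

    def unexpanded(self):
        # Continuations not yet added as children of this node.
        tried = {c.sql for c in self.children}
        return [s for s in CANDIDATES.get(self.sql, []) if s not in tried]

    def ucb(self, c=1.4):
        # Standard UCT score balancing exploitation and exploration.
        if self.visits == 0:
            return float("inf")
        return (self.value / self.visits
                + c * math.sqrt(math.log(self.parent.visits) / self.visits))

def rollout(sql):
    # Complete the partial query by random continuation, then self-score it.
    while CANDIDATES.get(sql):
        sql = random.choice(CANDIDATES[sql])
    return self_reward(sql)

def mcts(iterations=50, seed=0):
    random.seed(seed)
    root = Node("")
    for _ in range(iterations):
        # Selection: descend by UCB while the node is fully expanded.
        node = root
        while not node.unexpanded() and node.children:
            node = max(node.children, key=Node.ucb)
        # Expansion: attach one untried continuation, if any remain.
        moves = node.unexpanded()
        if moves:
            child = Node(moves[0], parent=node)
            node.children.append(child)
            node = child
        # Simulation and backpropagation.
        reward = rollout(node.sql)
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    return root

def best_query(root):
    # Read off the most-visited path as the final SQL.
    node = root
    while node.children:
        node = max(node.children, key=lambda c: c.visits)
    return node.sql

root = mcts(iterations=60)
print(best_query(root))
```

The search concentrates visits on the branch whose completions earn the highest self-reward, so the most-visited path recovers the preferred query without exhaustively enumerating the generation space.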