🤖 AI Summary
Existing Text-to-SQL approaches over-rely on execution accuracy, neglect semantic alignment, and exhibit poor cross-lingual generalization—suffering an average 6-percentage-point drop on non-English languages. To address this, we propose the first semantic alignment–driven reinforcement learning framework for few-shot multilingual Text-to-SQL, integrating Group Relative Policy Optimization (GRPO) with a semantic-similarity–based contrastive reward mechanism for end-to-end optimization on LLaMA-3-3B. Evaluated on the seven-language MultiSpider benchmark, our method achieves 88.86% execution accuracy using only 3,000 annotated examples—surpassing zero-shot 8B baselines by 7.43 points—and attains 59.14% semantic accuracy, with the largest gain in Vietnamese (+10.0%). This work pioneers the incorporation of semantic contrastive rewards into few-shot cross-lingual Text-to-SQL optimization, substantially narrowing the semantic gap between generated SQL and natural language intent.
📝 Abstract
Current Text-to-SQL methods are evaluated mainly on whether queries execute, overlooking the semantic alignment challenge -- both the semantic meaning of the generated query and the correctness of its execution results. Even execution accuracy itself drops significantly when moving from English to other languages, declining by an average of 6 percentage points across non-English languages. We address these challenges with a new framework that combines Group Relative Policy Optimization (GRPO) with a multilingual contrastive reward signal to improve both task efficiency and semantic accuracy of Text-to-SQL systems in cross-lingual scenarios. Our method teaches models to better align SQL generation with user intent by incorporating a reward signal based on semantic similarity. On the seven-language MultiSpider dataset, fine-tuning the LLaMA-3-3B model with GRPO improved execution accuracy to 87.4 percent (+26 pp over zero-shot) and semantic accuracy to 52.29 percent (+32.86 pp). Adding our contrastive reward signal to the GRPO framework further raised average semantic accuracy to 59.14 percent (+6.85 pp, up to +10 pp for Vietnamese). Our experiments show that a smaller, parameter-efficient 3B LLaMA model fine-tuned with our contrastive reward signal outperforms a much larger zero-shot 8B LLaMA model in execution accuracy by 7.43 pp (88.86 percent on the 3B model vs. 81.43 percent on the 8B model) and nearly matches its semantic accuracy (59.14 percent vs. 68.57 percent) -- all using just 3,000 reinforcement learning training examples. These results demonstrate that contrastive rewards for directed semantic alignment can improve Text-to-SQL performance without requiring large-scale training datasets.
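The abstract does not spell out the reward computation, so the following is only a minimal sketch of the general idea: each sampled SQL candidate gets a reward mixing an execution check with a semantic-similarity term against the reference query, and GRPO then normalizes rewards within the sampled group to obtain relative advantages. The token-level cosine similarity, the equal weighting of the two reward terms, and the function names here are illustrative assumptions, not the paper's actual implementation (which uses an embedding-based semantic similarity).

```python
from collections import Counter
import math

def token_cosine(a: str, b: str) -> float:
    """Cosine similarity over SQL token counts -- a cheap stand-in
    for the embedding-based semantic similarity used in the paper."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def group_relative_advantages(candidates, gold_sql, executes):
    """GRPO-style group baseline (illustrative): reward each sampled
    candidate, then normalize by the group's mean and std so that
    advantages are relative to the other samples in the group."""
    rewards = [
        (1.0 if ok else 0.0) + token_cosine(c, gold_sql)  # exec + semantic terms (assumed 1:1 weighting)
        for c, ok in zip(candidates, executes)
    ]
    mean = sum(rewards) / len(rewards)
    std = math.sqrt(sum((r - mean) ** 2 for r in rewards) / len(rewards))
    return [(r - mean) / (std + 1e-8) for r in rewards]
```

In this sketch, a candidate that both executes and closely matches the reference query receives the largest group-relative advantage, which is the signal GRPO uses to update the policy without a learned value model.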