🤖 AI Summary
Multilingual Text-to-SQL research is hindered by English-centric benchmarks and a lack of evaluation frameworks that capture real-world linguistic diversity. To address this, we introduce MultiSpider 2.0, the first multilingual Text-to-SQL benchmark covering eight typologically diverse languages while preserving complex SQL constructs (e.g., nested queries, multi-table joins). Building on it, we propose a collaborative language-agent framework that integrates state-of-the-art LLMs (e.g., DeepSeek-R1, OpenAI o1) to jointly perform cross-lingual semantic alignment and SQL refinement. Experimental results reveal a severe performance gap: mainstream LLMs achieve only 4% execution accuracy on MultiSpider 2.0, dramatically lower than the roughly 60% they reach on MultiSpider 1.0, highlighting a critical bottleneck in multilingual database understanding. Our framework lifts accuracy to 15%, marking the first systematic identification and mitigation of this limitation.
📝 Abstract
Text-to-SQL enables natural-language access to databases, yet most benchmarks are English-only, limiting multilingual progress. We introduce MultiSpider 2.0, extending Spider 2.0 to eight languages (English, German, French, Spanish, Portuguese, Japanese, Chinese, Vietnamese). It preserves Spider 2.0's structural difficulty while adding linguistic and dialectal variability, demanding deeper reasoning for complex SQL. On this benchmark, state-of-the-art LLMs (such as DeepSeek-R1 and OpenAI o1) reach only 4% execution accuracy when relying on intrinsic reasoning, versus 60% on MultiSpider 1.0. We therefore provide a collaboration-driven language-agent baseline that iteratively refines queries, improving accuracy to 15%. These results reveal a substantial multilingual gap and motivate methods that are robust across languages and ready for real-world enterprise deployment. Our benchmark is available at https://github.com/phkhanhtrinh23/Multilingual_Text_to_SQL.
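The iterative query-refinement loop in the baseline can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the `mock_llm` stub stands in for a real model call (e.g., DeepSeek-R1 or OpenAI o1), and the toy `users` table is an assumed schema. The core idea is simply generate, execute, and feed any database error back for another round.

```python
import sqlite3

def mock_llm(question, schema, feedback=None):
    """Hypothetical stand-in for an LLM call; a real agent would prompt a
    model with the question, schema, and prior execution feedback."""
    if feedback is None:
        # First attempt: a deliberately broken query to trigger refinement.
        return "SELECT nme FROM users WHERE age > 30"
    # Second attempt: the "refined" query after seeing the error message.
    return "SELECT name FROM users WHERE age > 30"

def refine_sql(question, schema, conn, max_rounds=3):
    """Generate SQL, execute it, and loop with error feedback on failure."""
    feedback = None
    for _ in range(max_rounds):
        sql = mock_llm(question, schema, feedback)
        try:
            rows = conn.execute(sql).fetchall()
            return sql, rows          # executable query found
        except sqlite3.Error as exc:
            feedback = str(exc)       # pass the DB error to the next round
    return sql, None                  # gave up after max_rounds

# Toy in-memory database standing in for a benchmark database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, age INTEGER)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("An", 34), ("Binh", 25)])

sql, rows = refine_sql("Which users are older than 30?",
                       "users(name, age)", conn)
```

Here the first query fails with a "no such column" error, and the loop succeeds on the second round; in the actual framework, collaboration among agents also handles cross-lingual alignment of the question before and during refinement.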