Invocable APIs derived from NL2SQL datasets for LLM Tool-Calling Evaluation

📅 2025-06-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of evaluating large language models’ (LLMs) capability to perform natural language-to-API (NL2API) tasks within complex, real-world API ecosystems. We propose the first SQL-data-driven method for automatically generating a scalable, executable NL2API benchmark: leveraging the BIRD-SQL dataset, we construct over 2,500 ground-truth pairs of REST APIs and corresponding natural language instructions. Our framework introduces a novel multi-stage evaluation protocol—covering intent recognition, nested API orchestration, and slot filling—and conducts systematic ablation studies. Experiments across 10 state-of-the-art LLMs reveal low task completion rates (7–47%); ReACT-style interactive prompting improves performance to ~50%. Crucially, direct SQL generation can outperform API invocation, exposing a performance gap induced by tool abstraction—highlighting persistent limitations in LLMs’ tool selection, composition, and reasoning over heterogeneous API interfaces.

📝 Abstract
Large language models (LLMs) are routinely deployed as agentic systems with access to tools that interact with live environments to accomplish tasks. In enterprise deployments, these systems need to interact with API collections that can be extremely large and complex, often backed by databases. To create datasets with such characteristics, we explore how existing NL2SQL (natural language to SQL query) datasets can be used to automatically create NL2API datasets. Specifically, this work describes a novel data generation pipeline that exploits the syntax of SQL queries to construct a functionally equivalent sequence of API calls. We apply this pipeline to one of the largest NL2SQL datasets, BIRD-SQL, to create a collection of over 2,500 APIs that can be served as invocable tools or REST endpoints. We pair natural language queries from BIRD-SQL with ground-truth API sequences based on this API pool. We use this collection to study the performance of 10 public LLMs and find that all models struggle to determine the right set of tools (a task comprising intent detection, sequencing with nested function calls, and slot filling). We find that models have extremely low task completion rates (7-47 percent, depending on the dataset), which improve only marginally to 50 percent when models are employed as ReACT agents that interact with the live API environment. The best task completion rates are far below what may be required for effective general-use tool-calling agents, suggesting substantial scope for improvement in current state-of-the-art tool-calling LLMs. We also conduct detailed ablation studies, such as assessing the impact of the number of tools available as well as the impact of tool- and slot-name obfuscation. We compare the performance of models on the original SQL generation tasks and find that current models are sometimes able to exploit SQL better than APIs.
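The core of the pipeline is mapping SQL syntax onto a functionally equivalent sequence of API calls. A minimal sketch of that idea, assuming a toy single-table SELECT grammar; the endpoint shape, parameter names, and regex grammar here are illustrative assumptions, not the paper's actual implementation:

```python
import re

def sql_to_api_calls(sql: str) -> list[dict]:
    """Translate a simple single-table SELECT into a hypothetical
    REST call description (toy grammar for illustration only)."""
    m = re.match(
        r"SELECT\s+(?P<cols>[\w,\s*]+)\s+FROM\s+(?P<table>\w+)"
        r"(?:\s+WHERE\s+(?P<col>\w+)\s*=\s*'(?P<val>[^']*)')?",
        sql.strip(), re.IGNORECASE,
    )
    if not m:
        raise ValueError("unsupported query shape")
    params = {"fields": m["cols"].replace(" ", "")}
    if m["col"]:
        # A WHERE clause becomes a filter slot on the endpoint.
        params[m["col"]] = m["val"]
    # One flat SELECT maps to one GET; joins and subqueries would
    # instead yield a nested sequence of calls.
    return [{"method": "GET", "endpoint": f"/{m['table']}", "params": params}]

calls = sql_to_api_calls("SELECT name, age FROM users WHERE city = 'Paris'")
```

A real pipeline over BIRD-SQL would need a full SQL parser and would emit sequences of calls for nested queries; this sketch only shows how WHERE predicates naturally become slot values, which is what makes slot filling a distinct evaluation stage.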
Problem

Research questions and friction points this paper is trying to address.

Convert NL2SQL datasets to NL2API for LLM tool-calling evaluation
Assess LLM performance in intent detection and API sequencing
Improve low task completion rates in tool-calling LLM agents
Innovation

Methods, ideas, or system contributions that make the work stand out.

Convert NL2SQL datasets to NL2API datasets
Generate API calls from SQL query syntax
Evaluate LLMs on tool-calling with 2500+ APIs
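The evaluation stages listed above (intent detection, sequencing, slot filling) can be sketched as a per-call scorer; the flat `{"tool", "args"}` call format and metric names here are assumptions for illustration, not the paper's protocol:

```python
def score_tool_call(predicted: dict, gold: dict) -> dict:
    """Score one predicted tool call against ground truth, split into
    the stages the benchmark separates: intent (right tool chosen)
    and slot filling (right argument values supplied)."""
    intent_ok = predicted.get("tool") == gold["tool"]
    gold_slots = gold.get("args", {})
    pred_slots = predicted.get("args", {})
    filled = sum(1 for k, v in gold_slots.items() if pred_slots.get(k) == v)
    slot_recall = filled / len(gold_slots) if gold_slots else 1.0
    return {
        "intent": intent_ok,
        "slot_recall": slot_recall,
        # Exact match also penalizes spurious extra arguments.
        "exact": intent_ok and pred_slots == gold_slots,
    }

gold = {"tool": "get_users", "args": {"city": "Paris"}}
pred = {"tool": "get_users", "args": {"city": "Paris", "limit": 10}}
result = score_tool_call(pred, gold)
```

Here the prediction picks the right tool and fills every gold slot, but the hallucinated `limit` argument breaks exact match; separating the stages this way is what lets an ablation attribute failures to intent detection versus slot filling.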