FIRESPARQL: An LLM-based Framework for SPARQL Query Generation over Scholarly Knowledge Graphs

📅 2025-08-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address structural inconsistencies and semantic inaccuracies in NLQ-to-SPARQL translation over scholarly knowledge graphs (SKGs), this paper proposes FIRESPARQL, a modular framework that integrates instruction-tuned large language models (LLMs), retrieval-augmented generation (RAG), and a SPARQL syntax correction mechanism. The framework jointly mitigates entity-linking bias, property misuse, and query-structure errors, and is evaluated across zero-shot, one-shot, and full fine-tuning configurations, optionally augmented with RAG context and the SPARQL refinement component. On the SciQA benchmark, the best configuration achieves 0.90 ROUGE-L (query text similarity) and 0.85 RelaxedEM (query result accuracy), substantially outperforming existing baselines. This work delivers a high-precision, robust, end-to-end SPARQL generation solution for SKG-based question answering.

📝 Abstract
Question answering over Scholarly Knowledge Graphs (SKGs) remains a challenging task due to the complexity of scholarly content and the intricate structure of these graphs. Large Language Model (LLM) approaches could be used to translate natural language questions (NLQs) into SPARQL queries; however, these LLM-based approaches struggle with SPARQL query generation due to limited exposure to SKG-specific content and the underlying schema. We identified two main types of errors in the LLM-generated SPARQL queries: (i) structural inconsistencies, such as missing or redundant triples in the queries, and (ii) semantic inaccuracies, where incorrect entities or properties are used in the queries despite a correct query structure. To address these issues, we propose FIRESPARQL, a modular framework that supports fine-tuned LLMs as a core component, with optional context provided via retrieval-augmented generation (RAG) and a SPARQL query correction layer. We evaluate the framework on the SciQA Benchmark using various configurations (zero-shot, zero-shot with RAG, one-shot, fine-tuning, and fine-tuning with RAG) and compare the performance with baseline and state-of-the-art approaches. We measure query accuracy using BLEU and ROUGE metrics, and query result accuracy using relaxed exact match (RelaxedEM), with respect to the gold standards containing the NLQs, SPARQL queries, and the results of the queries. Experimental results demonstrate that fine-tuning achieves the highest overall performance, reaching 0.90 ROUGE-L for query accuracy and 0.85 RelaxedEM for result accuracy on the test set.
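The abstract measures result accuracy with relaxed exact match (RelaxedEM), i.e. comparing the rows returned by the generated query against those of the gold query under a relaxed notion of equality. The exact relaxation rules are not given in this summary, so the sketch below (order-insensitive, duplicate-insensitive, case-normalized comparison) is an assumption about what "relaxed" means, not the paper's definition:

```python
def relaxed_em(gold_rows, pred_rows):
    """Relaxed exact match between two SPARQL result sets (sketch).

    Assumed relaxations: row order and duplicates are ignored, and
    string values are compared case-insensitively after stripping
    whitespace. Returns True iff the normalized sets are equal.
    """
    def normalize(rows):
        return {tuple(str(v).strip().lower() for v in row) for row in rows}
    return normalize(gold_rows) == normalize(pred_rows)
```

Under this reading, a generated query that returns the same bindings in a different order still counts as correct, which is why RelaxedEM can credit queries that ROUGE-L (a surface text metric) would penalize.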
Problem

Research questions and friction points this paper is trying to address.

Addresses SPARQL query generation challenges from natural language questions
Corrects structural and semantic errors in LLM-generated SPARQL queries
Improves accuracy over scholarly knowledge graphs using fine-tuned LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-based SPARQL query generation
Retrieval-augmented generation for context
SPARQL query correction layer
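The three components above can be sketched as a single pipeline. The stubs `retrieve_context`, `llm_generate`, and the brace-balancing fix are hypothetical stand-ins: the summary does not specify the framework's actual APIs, retrieval index, or correction rules, so everything here beyond the overall NLQ → (RAG context) → LLM → correction flow is an assumption:

```python
def retrieve_context(nlq: str) -> str:
    # Hypothetical RAG step: a real retriever would return schema
    # snippets (entities, properties) relevant to the question.
    return "orkgp:P32 = research problem ; orkgc:Paper = paper class"

def llm_generate(prompt: str) -> str:
    # Hypothetical LLM call: a fine-tuned model would translate the
    # prompt (context + NLQ) into a SPARQL query string.
    return ("SELECT ?paper WHERE { "
            "?paper a orkgc:Paper ; orkgp:P32 ?problem }")

def correct_query(query: str) -> str:
    # Toy correction layer: repair one common structural error
    # (unbalanced braces). A real layer would parse and fix syntax.
    opens, closes = query.count("{"), query.count("}")
    return query + "}" * max(0, opens - closes)

def nlq_to_sparql(nlq: str) -> str:
    # Pipeline: retrieve context, prompt the LLM, correct the output.
    context = retrieve_context(nlq)
    prompt = f"Context:\n{context}\n\nQuestion: {nlq}\nSPARQL:"
    return correct_query(llm_generate(prompt))
```

The design point this sketch illustrates is modularity: RAG context and the correction layer are optional wrappers around the core LLM call, so each configuration in the evaluation (zero-shot, with RAG, fine-tuned, etc.) swaps components without changing the pipeline shape.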