Interactive-KBQA: Multi-Turn Interactions for Knowledge Base Question Answering with Large Language Models

📅 2024-02-23
🏛️ Annual Meeting of the Association for Computational Linguistics
📈 Citations: 5
Influential: 0
🤖 AI Summary
To address the challenge of parsing complex natural-language questions into executable logical forms for knowledge base question answering (KBQA) in low-resource settings, this paper proposes an interactive reasoning framework in which a large language model (LLM) engages in multi-turn dialogue with the knowledge base. The method introduces: (1) three generic KB-interaction APIs that enable dynamic querying with state-aware feedback; (2) category-specific reasoning exemplars combined with few-shot in-context learning for domain adaptation; and (3) human-in-the-loop support for iterative refinement, keeping the generated logical forms interpretable and debuggable. Evaluated on four standard benchmarks (WebQuestionsSP, ComplexWebQuestions, KQA Pro, and MetaQA), the framework achieves competitive results with only a handful of demonstration examples. Ablation studies further confirm that human intervention improves the correctness and robustness of the outputs.

📝 Abstract
This study explores the realm of knowledge base question answering (KBQA). KBQA is considered a challenging task, particularly in parsing intricate questions into executable logical forms. Traditional semantic parsing (SP)-based methods require extensive data annotations, which result in significant costs. Recently, the advent of few-shot in-context learning, powered by large language models (LLMs), has showcased promising capabilities. However, fully leveraging LLMs to parse questions into logical forms in low-resource scenarios poses a substantial challenge. To tackle these hurdles, we introduce Interactive-KBQA, a framework designed to generate logical forms through direct interaction with knowledge bases (KBs). Within this framework, we have developed three generic APIs for KB interaction. For each category of complex question, we devised exemplars to guide LLMs through the reasoning processes. Our method achieves competitive results on the WebQuestionsSP, ComplexWebQuestions, KQA Pro, and MetaQA datasets with a minimal number of examples (shots). Importantly, our approach supports manual intervention, allowing for the iterative refinement of LLM outputs. By annotating a dataset with step-wise reasoning processes, we showcase our model's adaptability and highlight its potential for contributing significant enhancements to the field.
Problem

Research questions and friction points this paper is trying to address.

Parsing complex natural-language questions into executable logical forms with LLMs in low-resource settings.
Reducing the heavy data-annotation cost of semantic-parsing-based KBQA via few-shot in-context learning.
Improving KBQA accuracy through interactive querying and iterative, human-in-the-loop refinement.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Interactive-KBQA, a framework that generates logical forms through direct multi-turn interaction with the KB
Three generic APIs for KB interaction (entity search, schema exploration, query execution)
Category-specific exemplars that guide the LLM's step-wise reasoning
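The interaction loop described above can be sketched as a short episode over stubbed tools: link an entity, inspect its available relations, then execute a query. This is an illustrative sketch, not the authors' implementation; the tiny in-memory KB, the three stub functions, and the `answer` driver are all hypothetical stand-ins for the paper's three generic APIs.

```python
# Illustrative sketch of an Interactive-KBQA-style tool loop (assumed names,
# not the paper's code). A real system would back these stubs with a SPARQL
# endpoint and let an LLM decide which tool to call at each turn.

TINY_KB = {  # (subject, relation, object) triples standing in for a real KB
    ("Barack_Obama", "spouse", "Michelle_Obama"),
    ("Barack_Obama", "birthplace", "Honolulu"),
    ("Michelle_Obama", "birthplace", "Chicago"),
}

def search_nodes(surface: str):
    """Entity-linking stub: KB nodes whose name contains the surface string."""
    nodes = {s for s, _, _ in TINY_KB} | {o for _, _, o in TINY_KB}
    return sorted(n for n in nodes if surface.lower() in n.lower())

def search_graph_patterns(node: str):
    """Schema-exploration stub: relations attached to a given node."""
    return sorted({r for s, r, o in TINY_KB if node in (s, o)})

def execute_query(subject: str, relation: str):
    """Query-execution stub: objects matching a (subject, relation, ?x) pattern."""
    return [o for s, r, o in TINY_KB if (s, r) == (subject, relation)]

def answer(entity_mention: str, relation: str):
    """One multi-turn episode: link entity, inspect schema, execute query."""
    candidates = search_nodes(entity_mention)        # turn 1: find the entity
    if not candidates:
        return []
    relations = search_graph_patterns(candidates[0])  # turn 2: usable relations
    if relation not in relations:
        return []
    return execute_query(candidates[0], relation)     # turn 3: fetch answers
```

For example, `answer("Obama", "spouse")` links "Obama" to the first matching node, confirms `spouse` is an attached relation, and returns `["Michelle_Obama"]`; a failed link or missing relation yields an empty result, which is the state-aware feedback the model would react to on the next turn.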