Chatty-KG: A Multi-Agent AI System for On-Demand Conversational Question Answering over Knowledge Graphs

📅 2025-11-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing KGQA systems suffer from weak contextual tracking and poor structural preservation in multi-turn dialogues, while RAG-based approaches face heavy indexing overhead and imprecise query retrieval. To address these challenges, this paper proposes Chatty-KG, a modular multi-agent KGQA framework that requires no fine-tuning. It orchestrates task-specialized LLM agents to collaboratively perform semantic parsing, dialogue state tracking, and SPARQL query planning, integrating RAG with structured execution for low-latency, high-accuracy QA over dynamic knowledge graphs. Key innovations include: (i) the first application of multi-agent collaboration to KGQA, enabling robust coreference resolution and dynamic knowledge access; and (ii) a lightweight entity-relation linking and structured execution co-optimization strategy. Evaluated on multiple large-scale KGs, the method outperforms state-of-the-art baselines by +12.6% in F1 and +9.3% in P@1 on average. It is compatible with mainstream open-source and commercial LLMs, demonstrating strong scalability and deployment robustness.
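The agent pipeline the summary describes (dialogue tracking for coreference resolution, entity-relation linking, SPARQL planning) can be sketched roughly as follows. This is an illustrative toy, not the paper's implementation: the function names, the entity catalog, and the `dbo:country` predicate are all invented stand-ins, and the LLM calls each agent would make are replaced by trivial lookups.

```python
# Hypothetical sketch of the multi-turn KGQA flow. Each function stands in
# for one task-specialized agent; real agents would call an LLM.

def track_dialogue(history, question):
    """Resolve coreferences using prior turns (toy substitution, not an LLM)."""
    last_entity = history[-1]["entity"] if history else None
    if last_entity and "it" in question.split():
        return question.replace("it", last_entity)
    return question

def link_entities(question):
    """Map surface forms to KG IRIs (stubbed with a fixed catalog)."""
    catalog = {"Berlin": "dbr:Berlin", "Germany": "dbr:Germany"}
    return {name: iri for name, iri in catalog.items() if name in question}

def plan_sparql(links):
    """Compose an executable SPARQL query over the linked entity.
    A planning agent would also choose the predicate; dbo:country is assumed."""
    subject = next(iter(links.values()))
    return f"SELECT ?o WHERE {{ {subject} dbo:country ?o }}"

# Turn 1 established "Berlin"; turn 2 refers back to it with a pronoun.
history = [{"entity": "Berlin"}]
resolved = track_dialogue(history, "Which country is it in?")
query = plan_sparql(link_entities(resolved))
```

The point of the sketch is the division of labor: context resolution happens before linking, so the planner only ever sees a self-contained question, which is what lets the system translate multi-turn dialogue into single executable queries.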

📝 Abstract
Conversational Question Answering over Knowledge Graphs (KGs) combines the factual grounding of KG-based QA with the interactive nature of dialogue systems. KGs are widely used in enterprise and domain applications to provide structured, evolving, and reliable knowledge. Large language models (LLMs) enable natural and context-aware conversations, but lack direct access to private and dynamic KGs. Retrieval-augmented generation (RAG) systems can retrieve graph content but often serialize structure, struggle with multi-turn context, and require heavy indexing. Traditional KGQA systems preserve structure but typically support only single-turn QA, incur high latency, and struggle with coreference and context tracking. To address these limitations, we propose Chatty-KG, a modular multi-agent system for conversational QA over KGs. Chatty-KG combines RAG-style retrieval with structured execution by generating SPARQL queries through task-specialized LLM agents. These agents collaborate for contextual interpretation, dialogue tracking, entity and relation linking, and efficient query planning, enabling accurate and low-latency translation of natural questions into executable queries. Experiments on large and diverse KGs show that Chatty-KG significantly outperforms state-of-the-art baselines in both single-turn and multi-turn settings, achieving higher F1 and P@1 scores. Its modular design preserves dialogue coherence and supports evolving KGs without fine-tuning or pre-processing. Evaluations with commercial (e.g., GPT-4o, Gemini-2.0) and open-weight (e.g., Phi-4, Gemma 3) LLMs confirm broad compatibility and stable performance. Overall, Chatty-KG unifies conversational flexibility with structured KG grounding, offering a scalable and extensible approach for reliable multi-turn KGQA.
Problem

Research questions and friction points this paper is trying to address.

Enabling conversational QA over private, dynamic knowledge graphs
Overcoming limitations of RAG systems in handling graph structures
Addressing multi-turn context tracking challenges in KGQA systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-agent system for conversational QA over knowledge graphs
Generates SPARQL queries through specialized LLM agents
Combines RAG-style retrieval with structured graph execution
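The "structured execution" half of the last point is what distinguishes this approach from serialize-and-retrieve RAG: a generated SPARQL query's basic graph pattern is matched against the graph itself. A minimal stdlib-only sketch of that evaluation step, with an invented two-triple graph (the data and predicate names are illustrative, not from the paper):

```python
# Toy basic-graph-pattern evaluation: the structured-execution step that
# answers a generated SPARQL query against an in-memory triple store.

TRIPLES = {
    ("dbr:Berlin", "dbo:country", "dbr:Germany"),
    ("dbr:Paris", "dbo:country", "dbr:France"),
}

def match(pattern):
    """Evaluate one triple pattern; terms starting with '?' are variables."""
    results = []
    for triple in TRIPLES:
        binding = {}
        ok = True
        for term, value in zip(pattern, triple):
            if term.startswith("?"):
                binding[term] = value   # bind variable to graph term
            elif term != value:
                ok = False              # constant term fails to match
                break
        if ok:
            results.append(binding)
    return results

# Equivalent of: SELECT ?o WHERE { dbr:Berlin dbo:country ?o }
answers = match(("dbr:Berlin", "dbo:country", "?o"))
```

Because answers come from pattern matching over triples rather than from generated text, structure is preserved exactly, which is the property the Problem section says serialization-based RAG loses.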