Affordable AI Assistants with Knowledge Graph of Thoughts

📅 2025-04-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the limited cross-domain generalization, high operational costs, and low success rates of large language models (LLMs) on complex tasks, particularly on benchmarks like GAIA, this paper proposes Knowledge Graph of Thoughts (KGoT), a novel AI assistant architecture. KGoT introduces a dynamic, knowledge-graph-driven reasoning paradigm that tightly integrates LLM inference with the real-time construction and iterative refinement of structured knowledge graphs. It orchestrates multiple external tools, including web crawlers, Python execution, and math solvers, to automatically extract, validate, and enrich task-specific knowledge. This design improves reasoning interpretability, cumulative knowledge retention, and cost efficiency. Experiments demonstrate substantial gains: on the GAIA benchmark, KGoT achieves a 29% higher task success rate than Hugging Face Agents with GPT-4o mini while reducing computational cost by more than 36x relative to GPT-4o. It likewise improves success rates by 36% with Qwen2.5-32B and 37.5% with DeepSeek-R1-70B, underscoring its effectiveness across diverse model scales.

📝 Abstract
Large Language Models (LLMs) are revolutionizing the development of AI assistants capable of performing diverse tasks across domains. However, current state-of-the-art LLM-driven agents face significant challenges, including high operational costs and limited success rates on complex benchmarks like GAIA. To address these issues, we propose the Knowledge Graph of Thoughts (KGoT), an innovative AI assistant architecture that integrates LLM reasoning with dynamically constructed knowledge graphs (KGs). KGoT extracts and structures task-relevant knowledge into a dynamic KG representation, iteratively enhanced through external tools such as math solvers, web crawlers, and Python scripts. Such structured representation of task-relevant knowledge enables low-cost models to solve complex tasks effectively. For example, KGoT achieves a 29% improvement in task success rates on the GAIA benchmark compared to Hugging Face Agents with GPT-4o mini, while reducing costs by over 36x compared to GPT-4o. Improvements for recent reasoning models are similar, e.g., 36% and 37.5% for Qwen2.5-32B and Deepseek-R1-70B, respectively. KGoT offers a scalable, affordable, and high-performing solution for AI assistants.
Problem

Research questions and friction points this paper is trying to address.

Reducing high operational costs of LLM-driven AI assistants
Improving success rates on complex benchmarks like GAIA
Integrating LLM reasoning with dynamic knowledge graphs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates LLM reasoning with dynamic knowledge graphs
Uses external tools to iteratively enhance knowledge graphs
Enables low-cost models to solve complex tasks effectively
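
The innovation points above describe an iterative loop: extract task knowledge into a graph, then call external tools to enhance it until the task is answerable. The following is a minimal toy sketch of that idea; all names here (KnowledgeGraph, math_solver, solve) are illustrative assumptions, not the paper's actual implementation, and the single stub tool stands in for KGoT's full tool orchestration (web crawlers, Python scripts, math solvers).

```python
# Toy sketch of an iterative knowledge-graph-of-thoughts loop.
# Hypothetical names throughout; not the paper's real code.

class KnowledgeGraph:
    """Tiny in-memory triple store: (subject, predicate, object)."""
    def __init__(self):
        self.triples = set()

    def add(self, s, p, o):
        self.triples.add((s, p, o))

    def query(self, s=None, p=None, o=None):
        # Return all triples matching the given pattern (None = wildcard).
        return [t for t in self.triples
                if (s is None or t[0] == s)
                and (p is None or t[1] == p)
                and (o is None or t[2] == o)]

def math_solver(expr):
    """Stub tool: evaluate a plain arithmetic expression."""
    return str(eval(expr, {"__builtins__": {}}))  # toy arithmetic only

def solve(task_expr, max_iters=3):
    """Iteratively enhance the KG with tool results until it answers the task."""
    kg = KnowledgeGraph()
    kg.add("task", "asks", task_expr)
    for _ in range(max_iters):
        # 1. Check whether the graph already contains an answer.
        answers = kg.query(s="task", p="answer")
        if answers:
            return answers[0][2]
        # 2. Otherwise invoke a tool and insert its result as a new triple.
        kg.add("task", "answer", math_solver(task_expr))
    return None
```

In this sketch a small, cheap model would only need to decide which tool to call and which triples to add at each step, which is the mechanism the abstract credits for letting low-cost models solve complex tasks.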
Maciej Besta
ETH Zurich
Graph Computations, Effective & Efficient AI, Sparse Computations, High-Performance Computing
Lorenzo Paleari
ETH Zurich, Zurich, Switzerland
Jia Hao Andrea Jiang
ETH Zurich, Zurich, Switzerland
Robert Gerstenberger
ETH Zurich, Zurich, Switzerland
You Wu
ETH Zurich, Zurich, Switzerland
Patrick Iff
ETH Zurich, Zurich, Switzerland
Aleš Kubíček
ETH Zurich, Zurich, Switzerland
Piotr Nyczyk
Cledar, Wieliczka, Poland
Diana Khimey
ETH Zurich, Zurich, Switzerland
Jón Gunnar Hannesson
ETH Zurich, Zurich, Switzerland
Grzegorz Kwaśniewski
ETH Zurich, Zurich, Switzerland
Marcin Copik
ETH Zürich
High-Performance Computing, Serverless Computing, Performance Modeling
H. Niewiadomski
Cledar, Wieliczka, Poland
Torsten Hoefler
Professor of Computer Science at ETH Zurich
High Performance Computing, Deep Learning, Networking, Message Passing Interface, Parallel and Distributed Computing