Efficient Graph Understanding with LLMs via Structured Context Injection

📅 2025-08-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) struggle to comprehend complex graph structures and perform conceptual graph reasoning without fine-tuning. Method: This paper proposes a structured context injection framework that requires no parameter modification. Leveraging prompt engineering and structured input construction, it maps graph tasks into a conceptual representation space, implicitly aligning graph topology with semantics at the input layer—eliminating the need for multi-step querying or costly fine-tuning. Contribution/Results: The framework is compatible with both lightweight and large-scale LLMs, achieving significant accuracy gains across multiple graph reasoning benchmarks. Its performance matches or surpasses state-of-the-art fine-tuned methods while drastically reducing computational overhead. It offers high efficiency, broad applicability across diverse graph tasks, and seamless scalability to larger models and datasets.
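The paper does not publish its exact prompt layout, but the idea of injecting structured graph context at the input layer can be sketched as follows. The serialization format, field names, and the `build_structured_context` helper below are illustrative assumptions, not the authors' implementation:

```python
# Hypothetical sketch of structured context injection for a graph task.
# A graph is serialized into an explicit, structured textual context
# (node list, edge list, adjacency) that is prepended to the task,
# so the LLM can align topology with the question without fine-tuning.

def build_structured_context(edges, task):
    """Serialize an undirected graph into a structured prompt context."""
    nodes = sorted({n for e in edges for n in e})
    # Neighbors of each node, derived from the edge list.
    adjacency = {
        n: sorted(b if a == n else a for a, b in edges if n in (a, b))
        for n in nodes
    }
    lines = [
        "GRAPH (undirected)",
        "Nodes: " + ", ".join(map(str, nodes)),
        "Edges: " + "; ".join(f"{a}-{b}" for a, b in edges),
        "Adjacency:",
    ]
    lines += [f"  {n}: {', '.join(map(str, adjacency[n]))}" for n in nodes]
    lines += ["TASK", task]
    return "\n".join(lines)


prompt = build_structured_context(
    [(0, 1), (1, 2), (2, 0), (2, 3)],
    "Is there a path from node 0 to node 3?",
)
print(prompt)
```

The single structured prompt replaces multi-step querying: one forward pass over this context suffices, which is where the claimed efficiency gains over iterative or fine-tuned approaches come from.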

📝 Abstract
Large Language Models (LLMs) have shown strong capabilities in solving problems across domains, including graph-related tasks traditionally addressed by symbolic or algorithmic methods. In this work, we present a framework for structured context injection, where task-specific information is systematically embedded in the input to guide LLMs in solving a wide range of graph problems. Our method does not require fine-tuning of LLMs, making it cost-efficient and lightweight. We observe that certain graph reasoning tasks remain challenging for LLMs unless they are mapped to conceptually grounded representations. However, achieving such mappings through fine-tuning or repeated multi-step querying can be expensive and inefficient. Our approach offers a practical alternative by injecting structured context directly into the input, enabling the LLM to implicitly align the task with grounded conceptual spaces. We evaluate the approach on multiple graph tasks using both lightweight and large models, highlighting the trade-offs between accuracy and computational cost. The results demonstrate consistent performance improvements, showing that structured input context can rival or surpass more complex approaches. Our findings underscore the value of structured context injection as an effective and scalable strategy for graph understanding with LLMs.
Problem

Research questions and friction points this paper is trying to address.

Enhancing graph reasoning tasks using structured context injection
Enabling LLMs to solve graph problems without fine-tuning
Improving accuracy and efficiency in graph understanding with LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Structured context injection without fine-tuning
Direct input embedding for graph alignment
Lightweight framework for efficient graph understanding
Govind Waghmare
Mastercard
Sumedh BG
Mastercard
Sonia Gupta
Mastercard
Srikanta Bedathur
IIT Delhi
Databases · Information Retrieval · Data Mining