Code Graph Model (CGM): A Graph-Integrated Large Language Model for Repository-Level Software Engineering Tasks

📅 2025-05-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing open-source large language models (LLMs) perform well on function-level code generation but struggle with repository-level software engineering tasks, which require understanding semantic and structural dependencies that span files and functions. Mainstream agent-based approaches, moreover, carry privacy risks and offer limited controllability and accessibility. To address these challenges, we propose an end-to-end graph-enhanced framework that explicitly injects code graph structure into the LLM's attention mechanism. A lightweight adapter maps graph node attributes into the language model's input space, and graph neural network embeddings are integrated with graph-augmented retrieval-augmented generation (RAG). Evaluated on SWE-bench Lite with the open-source Qwen2.5-72B model, our method achieves a 43.00% task resolution rate, the best among open-weight models to date, surpassing the previous best open-source method by 12.33 percentage points and ranking eighth overall.
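To make the structural injection concrete, here is a minimal, hypothetical sketch of how a code graph could be turned into an attention bias so that a node attends only to itself and its graph neighbors. The function name and the undirected-edge assumption are illustrative, not the paper's actual masking scheme.

```python
import torch

def build_graph_attention_mask(edges: list[tuple[int, int]],
                               num_nodes: int) -> torch.Tensor:
    """Boolean mask: position (i, j) is True iff node i may attend to j,
    i.e. j == i or (i, j) is an edge of the code graph."""
    mask = torch.eye(num_nodes, dtype=torch.bool)   # every node sees itself
    for src, dst in edges:
        mask[src, dst] = True
        mask[dst, src] = True                       # assume undirected edges
    return mask

# Usage: convert the mask into an additive bias for softmax attention.
edges = [(0, 1), (1, 2)]                 # e.g. file -> function containment
mask = build_graph_attention_mask(edges, num_nodes=3)
bias = torch.zeros(3, 3).masked_fill(~mask, float("-inf"))
scores = torch.randn(3, 3) + bias        # pre-softmax attention scores
attn = torch.softmax(scores, dim=-1)     # blocked pairs receive zero weight
```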

📝 Abstract
Recent advances in Large Language Models (LLMs) have shown promise in function-level code generation, yet repository-level software engineering tasks remain challenging. Current solutions predominantly rely on proprietary LLM agents, which introduce unpredictability and limit accessibility, raising concerns about data privacy and model customization. This paper investigates whether open-source LLMs can effectively address repository-level tasks without requiring agent-based approaches. We demonstrate this is possible by enabling LLMs to comprehend functions and files within codebases through their semantic information and structural dependencies. To this end, we introduce Code Graph Models (CGMs), which integrate repository code graph structures into the LLM's attention mechanism and map node attributes to the LLM's input space using a specialized adapter. When combined with an agentless graph RAG framework, our approach achieves a 43.00% resolution rate on the SWE-bench Lite benchmark using the open-source Qwen2.5-72B model. This performance ranks first among open weight models, second among methods with open-source systems, and eighth overall, surpassing the previous best open-source model-based method by 12.33%.
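As a rough illustration of the agentless graph RAG idea mentioned in the abstract, the sketch below ranks code graph nodes against an issue description and expands the top hits to their one-hop neighbors, so structurally related files are retrieved together. Every name here (retrieve_subgraph, the embed callable, the node and edge encoding) is an assumption for illustration, not the paper's API.

```python
from collections import defaultdict
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def retrieve_subgraph(issue: str, nodes: dict[str, str],
                      edges: list[tuple[str, str]],
                      embed, top_k: int = 3) -> set[str]:
    """Return node names likely relevant to the issue: the top-k by text
    similarity, plus their one-hop neighbors in the code graph."""
    query = embed(issue)
    scores = {name: cosine(query, embed(text)) for name, text in nodes.items()}
    seeds = sorted(scores, key=scores.get, reverse=True)[:top_k]

    neighbors = defaultdict(set)          # undirected adjacency list
    for a, b in edges:
        neighbors[a].add(b)
        neighbors[b].add(a)

    hits = set(seeds)
    for seed in seeds:                    # one-hop structural expansion
        hits |= neighbors[seed]
    return hits
```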
Problem

Research questions and friction points this paper addresses.

Addressing repository-level software engineering tasks with open-source LLMs
Overcoming unpredictability and accessibility issues of proprietary LLM agents
Enhancing LLM comprehension of codebases via semantic and structural dependencies
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates repository code graph structure directly into the LLM's attention mechanism
Uses a lightweight adapter to map graph node attributes into the LLM's input space (see the sketch after this list)
Combines the model with an agentless graph RAG framework that boosts resolution performance
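A minimal sketch of the adapter bullet above, assuming a two-layer MLP that projects GNN node embeddings into the LM's token-embedding dimension so each graph node becomes a pseudo-token. The class name and layer sizes are assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class GraphToLMAdapter(nn.Module):
    """Project GNN node embeddings (d_gnn) into the language model's
    input space (d_lm), yielding one pseudo-token per graph node."""
    def __init__(self, d_gnn: int, d_lm: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(d_gnn, d_lm),
            nn.GELU(),
            nn.Linear(d_lm, d_lm),
        )

    def forward(self, node_embeddings: torch.Tensor) -> torch.Tensor:
        # (num_nodes, d_gnn) -> (num_nodes, d_lm)
        return self.proj(node_embeddings)

adapter = GraphToLMAdapter(d_gnn=256, d_lm=4096)  # sizes are assumptions
node_tokens = adapter(torch.randn(10, 256))       # splice before text tokens
```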
👥 Authors
Hongyuan Tao
Ant Group, Hangzhou, China
Ying Zhang
ShanghaiTech University, Shanghai, China
Zhenhao Tang
Ant Group, Hangzhou, China
Hongen Peng
Ant Group, Hangzhou, China
Xukun Zhu
Zhejiang University, Hangzhou, China
Bingchang Liu
Ant Group, Hangzhou, China
Yingguang Yang
University of Science and Technology of China, Hefei, China
Ziyin Zhang
Shanghai Jiao Tong University, Shanghai, China
Research interests: Artificial Intelligence, Natural Language Processing, Large Language Models
Zhaogui Xu
Ant Group, Hangzhou, China
Haipeng Zhang
ShanghaiTech University, Shanghai, China
Linchao Zhu
Zhejiang University, Hangzhou, China
Rui Wang
Shanghai Jiao Tong University, Shanghai, China
Hang Yu
Ant Group, Hangzhou, China
Jianguo Li
Director, Ant Group
Research interests: deep learning, computer vision, machine learning, systems
Peng Di
Senior Staff Engineer at Ant Group; Adjunct Associate Professor at UNSW Sydney
Research interests: Parallel Computing, Programming Languages, Compilers, Software Engineering