SmartGuard: Leveraging Large Language Models for Network Attack Detection through Audit Log Analysis and Summarization

📅 2025-06-20
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing audit log semantic analysis is confined to system-call granularity, limiting its ability to detect stealthy attacks, and relies on manually crafted rule sets—lacking both zero-day attack detection capability and interpretability. This paper proposes a fine-grained, endpoint-oriented attack detection method that transcends call-level abstraction by enabling function-level behavioral extraction and thread-aware event modeling. We construct an audit log knowledge graph and integrate graph embedding representations with large language models (LLMs), establishing the first LLM–graph collaborative framework supporting explainable diagnosis. The framework enables zero-day attack detection, natural-language-based attribution generation, and expert-in-the-loop fine-tuning. Experiments demonstrate an average F1-score of 96%, with strong robustness across multi-model transfer scenarios and high generalizability to previously unseen attacks.

Technology Category

Application Category

📝 Abstract
End-point monitoring solutions are widely deployed in today's enterprise environments to support advanced attack detection and investigation. These monitors continuously record system-level activities as audit logs and provide deep visibility into security events. Unfortunately, existing methods of semantic analysis based on audit logs have low granularity, only reaching the system call level, making it difficult to effectively classify highly covert behaviors. Additionally, existing works mainly match audit log streams with rule knowledge bases describing behaviors, which heavily rely on expertise and lack the ability to detect unknown attacks and provide interpretive descriptions. In this paper, we propose SmartGuard, an automated method that combines abstracted behaviors from audit event semantics with large language models. SmartGuard extracts specific behaviors (function level) from incoming system logs and constructs a knowledge graph, divides events by threads, and combines event summaries with graph embeddings to achieve information diagnosis and provide explanatory narratives through large language models. Our evaluation shows that SmartGuard achieves an average F1 score of 96% in assessing malicious behaviors and demonstrates good scalability across multiple models and unknown attacks. It also possesses excellent fine-tuning capabilities, allowing experts to assist in timely system updates.
Problem

Research questions and friction points this paper is trying to address.

Detect highly covert behaviors in audit logs
Overcome reliance on rule-based knowledge for attack detection
Provide interpretive descriptions for unknown attacks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages LLMs for network attack detection
Constructs knowledge graph from audit logs
Combines event summaries with graph embeddings
🔎 Similar Papers
No similar papers found.
H
Hao Zhang
State Key Laboratory of Blockchain and Data Security, Zhejiang University, Hangzhou, 310007, China, and also with the Hangzhou High-Tech Zone (Binjiang) Institute of Blockchain and Data Security, Hangzhou, 310051, China
Shuo Shao
Shuo Shao
Zhejiang University
AI Copyright ProtectionData ProtectionLLM Safety
S
Song Li
State Key Laboratory of Blockchain and Data Security, Zhejiang University, Hangzhou, 310007, China, and also with the Hangzhou High-Tech Zone (Binjiang) Institute of Blockchain and Data Security, Hangzhou, 310051, China
Zhenyu Zhong
Zhenyu Zhong
Ant Group
security
Y
Yan Liu
Ant Group, Hangzhou, 310063, China
Zhan Qin
Zhan Qin
Researcher, Zhejiang University
Data Security and PrivacyAI Security
Kui Ren
Kui Ren
Professor and Dean of Computer Science, Zhejiang University, ACM/IEEE Fellow
Data Security & PrivacyAI SecurityIoT & Vehicular Security