🤖 AI Summary
To address the challenges posed by Advanced Persistent Threat (APT) attacks (their stealthiness in system audit logs) and the shortcomings of existing detection methods (high false-positive rates, coarse-grained alerts, and susceptibility to spurious correlations among node attributes), this paper proposes a multi-stage attack reconstruction framework that integrates subgraph-level anomaly detection with large language models (LLMs). Methodologically, it (i) performs fine-grained subgraph anomaly detection on provenance graphs based on behavioral patterns rather than volatile attributes (e.g., file paths or IP addresses), and (ii) introduces an iterative LLM prompting-and-verification mechanism that generates accurate, interpretable, human-like narratives of end-to-end attack sequences. The paper's key contribution is the synergistic combination of detection and narrative generation. Evaluated on the DARPA TC3, OpTC, and NODLINK datasets, the framework significantly improves detection accuracy and alert readability, enabling security analysts to efficiently reason about APT behaviors across their full lifecycle.
📝 Abstract
Advanced Persistent Threats (APTs) are stealthy cyberattacks that often evade detection in system-level audit logs. Provenance graphs model these logs as connected entities and events, revealing relationships that are missed by linear log representations. Existing systems apply anomaly detection to these graphs but often suffer from high false positive rates and coarse-grained alerts. Their reliance on node attributes like file paths or IPs leads to spurious correlations, reducing detection robustness and reliability. To fully understand an attack's progression and impact, security analysts need systems that can generate accurate, human-like narratives of the entire attack. To address these challenges, we introduce OCR-APT, a system for APT detection and reconstruction of human-like attack stories. OCR-APT uses Graph Neural Networks (GNNs) for subgraph anomaly detection, learning behavior patterns around nodes rather than fragile attributes such as file paths or IPs. This approach yields more robust anomaly detection. It then iterates over detected subgraphs using Large Language Models (LLMs) to reconstruct multi-stage attack stories. Each stage is validated before proceeding, reducing hallucinations and ensuring an interpretable final report. Our evaluations on the DARPA TC3, OpTC, and NODLINK datasets show that OCR-APT outperforms state-of-the-art systems in both detection accuracy and alert interpretability. Moreover, OCR-APT reconstructs human-like reports that comprehensively capture the attack story.
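The detect-then-narrate loop described in the abstract can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not OCR-APT's implementation: `score_subgraphs`, `narrate`, and `validate` are hypothetical stand-ins for the system's GNN scorer, LLM prompting, and per-stage validation, none of which are specified in the abstract.

```python
# Hedged sketch of a subgraph-anomaly -> iterative-narration pipeline.
# All helpers are illustrative stand-ins, NOT the paper's actual components.

def score_subgraphs(subgraphs, known_patterns):
    """Stand-in for GNN scoring: rank subgraphs by the fraction of their
    events that fall outside known (benign) behavior patterns."""
    scores = {sg_id: sum(1 for e in events if e not in known_patterns) / len(events)
              for sg_id, events in subgraphs.items()}
    return sorted(scores, key=scores.get, reverse=True), scores

def narrate(events):
    """Stand-in for an LLM prompt turning a subgraph's events into a stage narrative."""
    return "Stage: " + " -> ".join(events)

def validate(narrative, events):
    """Stand-in validation: accept a stage only if every event is reflected
    in the narrative (a crude hallucination check)."""
    return all(e in narrative for e in events)

def reconstruct_story(subgraphs, known_patterns, threshold=0.5, max_retries=2):
    """Iterate over anomalous subgraphs, narrating and validating each stage."""
    ranked, scores = score_subgraphs(subgraphs, known_patterns)
    story = []
    for sg_id in ranked:
        if scores[sg_id] < threshold:
            continue  # mostly known behavior: treat as benign
        events = subgraphs[sg_id]
        for _ in range(max_retries):
            stage = narrate(events)
            if validate(stage, events):  # proceed only once the stage checks out
                story.append(stage)
                break
    return story
```

A toy run: with `subgraphs = {"sg1": ["firefox read mail", "firefox write /tmp/drop"], "sg2": ["sshd accept login"]}` and `known_patterns = {"sshd accept login", "firefox read mail"}`, only `sg1` clears the anomaly threshold, so the reconstructed story contains a single validated stage covering its two events.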