🤖 AI Summary
Existing evaluation methods inadequately assess large language models’ (LLMs) capabilities in real-world security operations center (SOC) settings—particularly for malware analysis and threat intelligence reasoning, two core defensive tasks.
Method: We introduce CyberSOCEval, a new open-source benchmark suite (released within CyberSecEval 4) explicitly designed for defensive cybersecurity tasks. It features a scenario-driven evaluation framework built on real-world malware samples and multi-source threat intelligence reasoning tasks, and its evaluations cover both standard and reasoning-optimized models that leverage test-time scaling.
Contribution/Results: Experiments reveal that state-of-the-art LLMs leave substantial headroom on CyberSOCEval, with no saturation observed. Notably, reasoning-optimized models fail to replicate the gains they show on mathematical and coding benchmarks, highlighting the need for domain-specific training data and reasoning mechanisms. CyberSOCEval establishes a reproducible, extensible evaluation standard and fosters community-driven advancement in AI-powered cyber defense.
📝 Abstract
Today's cyber defenders are overwhelmed by a deluge of security alerts, threat intelligence signals, and shifting business context, creating an urgent need for AI systems to enhance operational security work. While Large Language Models (LLMs) have the potential to automate and scale Security Operations Center (SOC) operations, existing evaluations do not fully assess the scenarios most relevant to real-world defenders. This lack of informed evaluation impacts both AI developers and those applying LLMs to SOC automation. Without clear insight into LLM performance in real-world security scenarios, developers lack a north star for development, and users cannot reliably select the most effective models. Meanwhile, malicious actors are using AI to scale cyber attacks, highlighting the need for open-source benchmarks to drive adoption and community-driven improvement among defenders and model developers. To address this, we introduce CyberSOCEval, a new suite of open-source benchmarks within CyberSecEval 4. CyberSOCEval includes benchmarks tailored to evaluate LLMs in two tasks: Malware Analysis and Threat Intelligence Reasoning, two core defensive domains with inadequate coverage in current benchmarks. Our evaluations show that larger, more modern LLMs tend to perform better, confirming the training scaling laws paradigm. We also find that reasoning models leveraging test-time scaling do not achieve the same boost as in coding and math, suggesting these models have not been trained to reason about cybersecurity analysis, and pointing to a key opportunity for improvement. Finally, current LLMs are far from saturating our evaluations, showing that CyberSOCEval presents a significant challenge for AI developers to improve cyber defense capabilities.
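As context for how a benchmark suite like this is typically scored, the sketch below computes per-task accuracy over graded model responses. This is a minimal illustration, not CyberSOCEval's actual harness; the record layout and task names are hypothetical, and the real evaluation code lives in the open-source CyberSecEval repository.

```python
# Minimal sketch of scoring a multiple-choice security benchmark.
# Task names and the (task, expected, predicted) record layout are
# illustrative, NOT CyberSOCEval's real schema.

from collections import defaultdict

def score_responses(records):
    """Compute per-task accuracy from (task, expected, predicted) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for task, expected, predicted in records:
        total[task] += 1
        # Normalize letter answers so "b" and "B" compare equal.
        if predicted.strip().upper() == expected.strip().upper():
            correct[task] += 1
    return {task: correct[task] / total[task] for task in total}

records = [
    ("malware_analysis", "B", "b"),
    ("malware_analysis", "D", "A"),
    ("threat_intel_reasoning", "C", "C"),
]
print(score_responses(records))
# → {'malware_analysis': 0.5, 'threat_intel_reasoning': 1.0}
```

Reporting accuracy per task (rather than one pooled number) is what lets a benchmark show the kind of per-domain gaps the paper describes, such as strong coding performance not carrying over to threat intelligence reasoning.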