A Unified Framework for Human AI Collaboration in Security Operations Centers with Trusted Autonomy

📅 2025-05-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current SOC frameworks overemphasize automation while neglecting systematic human-AI collaboration—particularly human oversight, dynamic trust calibration, and scalable AI autonomy—and predominantly adopt static, binary autonomy settings, rendering them ill-suited to heterogeneous task complexity and risk profiles. To address this, we propose the first unified human-AI collaborative framework for SOCs, featuring a novel five-level AI autonomy model that formally characterizes the mapping among autonomy levels, trust thresholds, and human-in-the-loop (HITL) roles. The framework integrates a fine-tuned LLM-driven cybersecurity AI-Avatar, a trust-aware HITL mechanism, a hierarchical autonomous policy engine, and a simulation-based cyber range for empirical validation. Experiments demonstrate significant mitigation of alert fatigue, improved response coordination efficiency, and high-fidelity, interpretable AI-augmented decision-making—all while preserving human authority and accountability.

📝 Abstract
This article presents a structured framework for human-AI collaboration in Security Operations Centers (SOCs), integrating AI autonomy, trust calibration, and human-in-the-loop decision making. Existing SOC frameworks often focus narrowly on automation and lack systematic structures for managing human oversight, trust calibration, and scalable AI autonomy. Many assume static or binary autonomy settings, failing to account for the varied complexity, criticality, and risk of SOC tasks in human-AI collaboration. To address these limitations, we propose a novel tiered autonomy framework grounded in five levels of AI autonomy, from manual to fully autonomous, each mapped to human-in-the-loop (HITL) roles and task-specific trust thresholds. This enables adaptive and explainable AI integration across core SOC functions, including monitoring, protection, threat detection, alert triage, and incident response. The proposed framework differentiates itself from previous research by formally connecting autonomy, trust, and HITL roles across SOC task levels, allowing adaptive task distribution according to operational complexity and risk. The framework is exemplified through a simulated cyber range featuring the cybersecurity AI-Avatar, a fine-tuned LLM-based SOC assistant. The AI-Avatar case study illustrates human-AI collaboration on SOC tasks, reducing alert fatigue, enhancing response coordination, and strategically calibrating trust. This research presents both the theoretical foundations and practical feasibility of designing next-generation cognitive SOCs that leverage AI not to replace but to augment human decision-making.
Problem

Research questions and friction points this paper is trying to address.

Lack of systematic human oversight and trust calibration in SOCs
Static autonomy settings ignore task complexity and risk variation
Need adaptive AI integration for SOC functions with human collaboration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Tiered AI autonomy framework for SOCs
Human-in-the-loop with trust calibration
AI-Avatar using fine-tuned LLM assistant
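The tiered autonomy idea above — five autonomy levels, each gated by a trust threshold and paired with a HITL role — can be sketched as a small routing policy. A minimal sketch, assuming illustrative level names, threshold values, and a risk cap; none of these specifics are taken from the paper:

```python
from dataclasses import dataclass

# Illustrative five-level autonomy scale. Level names, trust thresholds,
# and HITL roles are assumptions for this sketch, not values from the paper.
AUTONOMY_LEVELS = [
    # (level, name, min_trust, hitl_role)
    (1, "manual",           0.0, "human performs task; AI observes"),
    (2, "assisted",         0.3, "AI suggests; human decides and acts"),
    (3, "supervised",       0.5, "AI acts; human approves before execution"),
    (4, "conditional",      0.7, "AI acts; human monitors and can veto"),
    (5, "fully_autonomous", 0.9, "AI acts; human audits after the fact"),
]

@dataclass
class SocTask:
    name: str
    risk: float  # 0 (low) .. 1 (high), e.g. from a criticality score

def select_autonomy(task: SocTask, calibrated_trust: float) -> tuple:
    """Pick the highest autonomy level whose trust threshold is met,
    then step down for high-risk tasks so a human stays in the loop."""
    eligible = [lvl for lvl in AUTONOMY_LEVELS if calibrated_trust >= lvl[2]]
    level = eligible[-1]
    # High-risk tasks are capped at supervised autonomy (level 3),
    # preserving human approval authority for critical actions.
    if task.risk >= 0.8 and level[0] > 3:
        level = AUTONOMY_LEVELS[2]
    return level

# With the same calibrated trust, a routine triage task runs at a higher
# autonomy level than a high-risk incident response task.
triage = select_autonomy(SocTask("alert_triage", risk=0.4), calibrated_trust=0.75)
response = select_autonomy(SocTask("incident_response", risk=0.9), calibrated_trust=0.75)
print(triage[1])    # conditional
print(response[1])  # supervised
```

The design choice worth noting is that trust raises the autonomy ceiling while task risk lowers it, so the same AI can operate at different autonomy levels across SOC functions — which is the adaptive, risk-aware allocation the framework argues for.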
Ahmad Mohsin
Centre for Securing Digital Futures, School of Science, Edith Cowan University, Australia
Helge Janicke
Edith Cowan University
Computer Science · Cyber Security · Digital Forensics · Control Systems · Cyber Physical Systems
Ahmed Ibrahim
Centre for Securing Digital Futures, School of Science, Edith Cowan University, Australia
Iqbal H. Sarker
Centre for Securing Digital Futures, School of Science, Edith Cowan University, Australia
S. Çamtepe
CSIRO's Data61, Australia