🤖 AI Summary
Cyberattacks increasingly exploit ambiguity to evade detection, yet existing security analytics lack models of attacker cognition. Method: This work pioneers the incorporation of “ambiguity aversion”—a well-established cognitive bias from psychology—into cybersecurity, formalizing it as a quantifiable, attacker-specific trait. Leveraging red-team experiments, we collect multimodal attack data and deploy a large language model–driven log parsing pipeline to automatically map unstructured system logs to the MITRE ATT&CK framework. We then design a sequence-based computational model that infers individual-level ambiguity aversion in near-real time from observed attack behavior. Contribution/Results: Our approach goes beyond conventional behavior-centric analysis by enabling interpretable, operationally actionable modeling of attacker cognition. It establishes both theoretical foundations and a technical framework for generating adaptive, cognitively informed defense strategies tailored to individual adversaries.
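As a rough illustration of the log-to-ATT&CK mapping step, the sketch below uses a keyword lookup as a stand-in for the LLM classifier described above. The technique IDs are real MITRE ATT&CK identifiers, but the keyword rules, function name, and log format are invented for this example.

```python
# Illustrative stand-in for the LLM-driven log parser: a keyword table
# replaces the model, mapping each raw log line to an ATT&CK technique ID.
KEYWORD_TO_TECHNIQUE = {
    "nmap": "T1046",      # Network Service Discovery
    "ssh": "T1021.004",   # Remote Services: SSH
    "mimikatz": "T1003",  # OS Credential Dumping
    "scp": "T1048",       # Exfiltration Over Alternative Protocol
}

def parse_logs_to_sequence(log_lines):
    """Map unstructured log lines to an ordered ATT&CK technique sequence."""
    sequence = []
    for line in log_lines:
        lowered = line.lower()
        for keyword, technique in KEYWORD_TO_TECHNIQUE.items():
            if keyword in lowered:
                sequence.append(technique)
                break  # at most one technique per log line in this sketch
    return sequence
```

For example, the logs `["10:01 attacker ran nmap -sV 10.0.0.0/24", "10:07 ssh login as admin"]` would yield the action sequence `["T1046", "T1021.004"]`, which downstream cognitive models can consume.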
📝 Abstract
Adversaries (hackers) attempting to infiltrate networks frequently face uncertainty in their operational environments. This research investigates whether we can model and detect when adversaries exhibit ambiguity aversion, a cognitive bias reflecting a preference for known over unknown probabilities. We introduce a novel methodological framework that (1) leverages rich, multi-modal data from human-subjects red-team experiments, (2) employs a large language model (LLM) pipeline to parse unstructured logs into MITRE ATT&CK-mapped action sequences, and (3) applies a new computational model to infer an attacker's ambiguity aversion level in near-real time. By operationalizing this cognitive trait, our work provides a foundational component for developing adaptive cognitive defense strategies.