The Silicon Psyche: Anthropomorphic Vulnerabilities in Large Language Models

📅 2025-12-30
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses a critical gap in large language model (LLM) safety evaluation by highlighting the overlooked inheritance of human psychological vulnerabilities from training data. The authors introduce the concept of Anthropomorphic Vulnerability Inheritance (AVI) and, for the first time, adapt 100 psychological vulnerability indicators from the Cybersecurity Psychology Framework (CPF) into adversarial scenarios tailored for LLMs. They develop a Synthetic Psychological Assessment Protocol (SysPAP) to systematically evaluate these vulnerabilities. Cross-model experiments reveal that while mainstream LLMs effectively resist conventional jailbreaking attacks, they remain highly susceptible to psychological manipulations such as authority gradients and time pressure. These findings underscore the urgent need to develop “psychological firewalls” to safeguard LLMs against exploitation through human-like cognitive weaknesses.

Technology Category

Application Category

📝 Abstract
Large Language Models (LLMs) are rapidly transitioning from conversational assistants to autonomous agents embedded in critical organizational functions, including Security Operations Centers (SOCs), financial systems, and infrastructure management. Current adversarial testing paradigms focus predominantly on technical attack vectors: prompt injection, jailbreaking, and data exfiltration. We argue this focus is catastrophically incomplete. LLMs, trained on vast corpora of human-generated text, have inherited not merely human knowledge but human \textit{psychological architecture} -- including the pre-cognitive vulnerabilities that render humans susceptible to social engineering, authority manipulation, and affective exploitation. This paper presents the first systematic application of the Cybersecurity Psychology Framework (\cpf{}), a 100-indicator taxonomy of human psychological vulnerabilities, to non-human cognitive agents. We introduce the \textbf{Synthetic Psychometric Assessment Protocol} (\sysname{}), a methodology for converting \cpf{} indicators into adversarial scenarios targeting LLM decision-making. Our preliminary hypothesis testing across seven major LLM families reveals a disturbing pattern: while models demonstrate robust defenses against traditional jailbreaks, they exhibit critical susceptibility to authority-gradient manipulation, temporal pressure exploitation, and convergent-state attacks that mirror human cognitive failure modes. We term this phenomenon \textbf{Anthropomorphic Vulnerability Inheritance} (AVI) and propose that the security community must urgently develop ``psychological firewalls''-- intervention mechanisms adapted from the Cybersecurity Psychology Intervention Framework (\cpif{}) -- to protect AI agents operating in adversarial environments.
Problem

Research questions and friction points this paper is trying to address.

Anthropomorphic Vulnerability
Large Language Models
Cybersecurity Psychology
Social Engineering
Cognitive Vulnerabilities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Anthropomorphic Vulnerability Inheritance
Cybersecurity Psychology Framework
Synthetic Psychometric Assessment Protocol
psychological firewalls
cognitive vulnerabilities
🔎 Similar Papers
No similar papers found.
G
Giuseppe Canale
CPF3.org, Independent Researcher
Kashyap Thimmaraju
Kashyap Thimmaraju
TU Berlin
CybersecurityHuman Performance