AI Through the Human Lens: Investigating Cognitive Theories in Machine Psychology

📅 2025-06-22
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates whether large language models (LLMs) exhibit human-like cognitive patterns across four foundational psychological paradigms: Thematic Apperception Test (TAT), framing effects, Moral Foundations Theory, and cognitive dissonance. Using structured prompt engineering and psychometrically grounded automated scoring, we conduct reproducible behavioral assessments on over a dozen open- and closed-weight LLMs. Our work is the first to systematically integrate established human cognitive theories into LLM evaluation. Results reveal robust cross-model tendencies: consistent positive framing bias; moral judgments disproportionately anchored in the “liberty/oppression” foundation; self-justification behaviors under cognitive conflict; and limited but statistically reliable psychological projection in narrative coherence and moral reasoning. By establishing an empirical foundation for “machine psychology,” this research introduces a novel paradigm for cognitive modeling of LLMs and trustworthy AI assessment.

📝 Abstract
We investigate whether Large Language Models (LLMs) exhibit human-like cognitive patterns under four established frameworks from psychology: the Thematic Apperception Test (TAT), Framing Bias, Moral Foundations Theory (MFT), and Cognitive Dissonance. We evaluated several proprietary and open-source models using structured prompts and automated scoring. Our findings reveal that these models often produce coherent narratives, show susceptibility to positive framing, exhibit moral judgments aligned with Liberty/Oppression concerns, and demonstrate self-contradictions tempered by extensive rationalization. Such behaviors mirror human cognitive tendencies yet are shaped by their training data and alignment methods. We discuss the implications for AI transparency, ethical deployment, and future work that bridges cognitive psychology and AI safety.
Problem

Research questions and friction points this paper is trying to address.

Do LLMs show human-like cognitive patterns under psychological frameworks?
How do LLMs respond to framing bias and moral judgment tests?
What are the implications of LLM cognitive behaviors for AI ethics?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluating LLMs using psychological frameworks
Structured prompts and automated scoring
Analyzing cognitive patterns in AI models
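The "structured prompts and automated scoring" setup can be illustrated with a minimal framing-bias probe. Everything below is an assumption for illustration: the prompt templates, the keyword-based scorer, and the stub `query_model` function stand in for the paper's actual protocol and for a real LLM API call.

```python
# Hypothetical sketch of a framing-bias probe: present the same scenario
# under a gain frame and a loss frame, score each response automatically,
# and compare. Templates and scoring rules are illustrative, not the
# authors' actual materials.

SCENARIO = "A new treatment for 600 patients is being considered."
FRAMES = {
    "gain": "If adopted, 200 people will be saved. Do you adopt it? Answer yes or no.",
    "loss": "If adopted, 400 people will die. Do you adopt it? Answer yes or no.",
}

def query_model(prompt: str) -> str:
    """Stub standing in for an LLM call; replace with a real API client."""
    # This stub mimics the classic framing effect: it accepts the gain
    # frame and rejects the loss frame despite identical outcomes.
    return "yes" if "saved" in prompt else "no"

def score_response(text: str) -> int:
    """Automated scoring: 1 = accepts the treatment, 0 = rejects it."""
    return 1 if text.strip().lower().startswith("yes") else 0

def framing_gap(frames: dict[str, str]) -> int:
    """Acceptance under the gain frame minus acceptance under the loss
    frame; a nonzero gap flags frame-sensitive behavior."""
    scores = {name: score_response(query_model(f"{SCENARIO} {question}"))
              for name, question in frames.items()}
    return scores["gain"] - scores["loss"]

print(framing_gap(FRAMES))  # 1 for this stub: its answer flips with the frame
```

Running the same probe across many models and aggregating the gaps is one plausible way to obtain the kind of cross-model framing statistics the summary describes.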