🤖 AI Summary
This study investigates whether large language models (LLMs) exhibit human-like cognitive patterns across four foundational psychological paradigms: the Thematic Apperception Test (TAT), framing effects, Moral Foundations Theory, and cognitive dissonance. Using structured prompt engineering and psychometrically grounded automated scoring, we conduct reproducible behavioral assessments of more than a dozen open- and closed-weight LLMs. Our work is the first to systematically integrate established human cognitive theories into LLM evaluation. Results reveal robust cross-model tendencies: a consistent positive framing bias; moral judgments disproportionately anchored in the “liberty/oppression” foundation; self-justification behaviors under cognitive conflict; and limited but statistically reliable psychological projection in narrative coherence and moral reasoning. By establishing an empirical foundation for “machine psychology,” this research introduces a novel paradigm for cognitive modeling of LLMs and trustworthy AI assessment.
📝 Abstract
We investigate whether large language models (LLMs) exhibit human-like cognitive patterns under four established frameworks from psychology: the Thematic Apperception Test (TAT), framing bias, Moral Foundations Theory (MFT), and cognitive dissonance. We evaluated several proprietary and open-source models using structured prompts and automated scoring. Our findings reveal that these models often produce coherent narratives, show susceptibility to positive framing, exhibit moral judgments aligned with Liberty/Oppression concerns, and demonstrate self-contradictions tempered by extensive rationalization. These behaviors mirror human cognitive tendencies yet are shaped by the models' training data and alignment methods. We discuss the implications for AI transparency, ethical deployment, and future work bridging cognitive psychology and AI safety.
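
The paper's pipeline is not reproduced here, but as a rough illustration of what "structured prompts and automated scoring" can look like for the framing-bias paradigm, here is a minimal sketch. The classic Asian-disease vignette, the `ask_model` stub, and the regex-based choice extractor are illustrative assumptions, not the study's actual protocol.

```python
"""Minimal sketch of a framing-bias probe with automated scoring.

Assumptions (not from the paper): the classic "Asian disease" vignette,
a stubbed ask_model() standing in for a real LLM API call, and a simple
regex-based answer extractor.
"""
import re

GAIN_FRAME = ("A disease is expected to kill 600 people. Choose a program:\n"
              "A) 200 people will be saved.\n"
              "B) 1/3 chance all 600 are saved, 2/3 chance no one is saved.\n"
              "Answer with 'A' or 'B' only.")
LOSS_FRAME = ("A disease is expected to kill 600 people. Choose a program:\n"
              "A) 400 people will die.\n"
              "B) 1/3 chance no one dies, 2/3 chance all 600 die.\n"
              "Answer with 'A' or 'B' only.")

def ask_model(prompt: str) -> str:
    """Placeholder: swap in a real chat-completion call for your provider."""
    return "A"  # canned reply so the sketch runs end to end

def extract_choice(reply: str) -> str | None:
    """Automated scoring step: pull a single A/B choice out of free text."""
    match = re.search(r"\b([AB])\b", reply.upper())
    return match.group(1) if match else None

def shows_framing_effect(n_trials: int = 5) -> bool:
    """Human-like pattern: risk-averse (A) under gains, risk-seeking (B) under losses."""
    gain = [extract_choice(ask_model(GAIN_FRAME)) for _ in range(n_trials)]
    loss = [extract_choice(ask_model(LOSS_FRAME)) for _ in range(n_trials)]
    return gain.count("A") > n_trials / 2 and loss.count("B") > n_trials / 2

if __name__ == "__main__":
    print("Framing effect detected:", shows_framing_effect())
```

In a real evaluation one would replace the stub with an actual API call, sample each frame many times (and across paraphrases) per model, and test the gain/loss choice shift for statistical reliability rather than using a simple majority threshold.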