Enhancing Computational Cognitive Architectures with LLMs: A Case Study

📅 2025-09-13
🤖 AI Summary
This study addresses the challenge of simultaneously achieving real-world complexity and psychological plausibility when integrating large language models (LLMs) with computational cognitive architectures. We propose a hybrid framework grounded in CLARION’s implicit–explicit dual-system architecture, enabling bidirectional information exchange and functional complementarity between LLMs and cognitive modules via modular interface design, prompt engineering, and in-context learning. To our knowledge, this is the first work to deeply embed LLMs within a classical cognitive architecture while preserving psychological validity. The integration significantly enhances reasoning, language understanding, and task generalization capabilities. Empirical evaluation demonstrates synergistic advantages in both computational efficiency and cognitive fidelity. Our approach establishes a novel paradigm for next-generation intelligent agents that jointly optimize high computational power and strong psychological validity.
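The summary describes a dual-system design in which Clarion's explicit (rule-based) level and an LLM-backed implicit level exchange information through a modular interface. The paper does not publish code, so the following is only a minimal illustrative sketch of that idea; every name here (`ExplicitSystem`, `ImplicitSystem`, `HybridAgent`, `fake_llm`) is hypothetical, and the LLM is replaced by a stub callable so the example runs offline.

```python
# Hypothetical sketch of a CLARION-style dual system with an LLM as the
# implicit component. Names and rules are illustrative, not from the paper.
from typing import Callable, Optional

class ExplicitSystem:
    """Symbolic rules mapping a query to an action, if a rule matches."""
    def __init__(self):
        self.rules = {"greet": "say_hello", "stop": "halt"}

    def decide(self, query: str) -> Optional[str]:
        return self.rules.get(query)

class ImplicitSystem:
    """Wraps an LLM callable; prompt engineering supplies task context."""
    def __init__(self, llm: Callable[[str], str]):
        self.llm = llm

    def decide(self, query: str) -> str:
        prompt = f"Task context: choose an action.\nQuery: {query}\nAction:"
        return self.llm(prompt)

class HybridAgent:
    """Explicit rules take priority; the implicit (LLM) level handles what
    rules cannot, and its answers are cached back into the rule set -- a
    crude stand-in for bottom-up explicitation in CLARION terms."""
    def __init__(self, llm: Callable[[str], str]):
        self.explicit = ExplicitSystem()
        self.implicit = ImplicitSystem(llm)

    def act(self, query: str) -> str:
        action = self.explicit.decide(query)
        if action is None:
            action = self.implicit.decide(query)
            self.explicit.rules[query] = action  # extract a new explicit rule
        return action

# Stand-in for a real LLM so the sketch is self-contained.
def fake_llm(prompt: str) -> str:
    return "improvise"

agent = HybridAgent(fake_llm)
print(agent.act("greet"))    # matched by an explicit rule: say_hello
print(agent.act("juggle"))   # no rule; falls through to the implicit level
print("juggle" in agent.explicit.rules)  # the implicit answer became a rule
```

The routing order (explicit first, implicit as fallback, with rule extraction feeding back) is one plausible reading of the "bidirectional information exchange" the summary mentions; the actual framework's interface design may differ.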

📝 Abstract
Computational cognitive architectures are broadly scoped models of the human mind that combine different psychological functionalities (and often different computational methods for those functionalities) into one unified framework, structuring them in a psychologically plausible and validated way. However, such models have thus far had only limited computational capabilities, constrained mainly by the computational tools and techniques they adopted. More recently, LLMs have proved computationally more capable than any other tools. Thus, to deal with real-world complexity and psychological realism at the same time, incorporating LLMs into cognitive architectures naturally becomes an important task. In the present article, a synergistic combination of the Clarion cognitive architecture and LLMs is discussed as a case study. The implicit-explicit dichotomy that is fundamental to Clarion is leveraged for a seamless integration of Clarion and LLMs. As a result, the computational power of LLMs is combined with the psychological nicety of Clarion.
Problem

Research questions and friction points this paper is trying to address.

Enhancing cognitive architectures with LLMs
Integrating LLMs into Clarion architecture
Combining computational power with psychological realism
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrating LLMs into Clarion cognitive architecture
Leveraging the implicit-explicit dichotomy for seamless integration
Combining LLMs' computational power with psychological realism