Architectural Precedents for General Agents using Large Language Models

📅 2025-05-11
🤖 AI Summary
This work addresses the lack of systematic architectural principles guiding agentic large language model (LLM) systems. We establish, for the first time, a design-pattern mapping between pre-Transformer classical cognitive architectures and modern agentic LLM systems. Through cognitive architecture analysis, pattern abstraction, and empirical behavioral study of LLM agents, we identify six high-frequency cognitive design patterns—explicitly or implicitly instantiated in current systems. Building on this, we propose a pattern-matching–based defect prediction framework that reveals three critical capability gaps: dynamic goal reconfiguration, multi-granularity memory scheduling, and context-sensitive meta-reasoning. Our findings elucidate core architectural principles underlying artificial general intelligence (AGI) and yield a verifiable, pattern-driven roadmap for next-generation agentic architectures. This work bridges a foundational gap in AGI research by enabling cross-paradigm theoretical integration between classical cognitive science and contemporary LLM-based agency.

📝 Abstract
One goal of AI (and AGI) research is to identify and understand specific mechanisms and representations sufficient for general intelligence. This work often manifests in research focused on architectures, and many cognitive architectures have been explored in AI/AGI. However, different research groups, and even different research traditions, have somewhat independently identified similar patterns of processes and representations, or cognitive design patterns, that are manifest in existing architectures. Today, AI systems exploiting large language models (LLMs) offer a relatively new combination of mechanism and representation for exploring the possibilities of general intelligence. In this paper, we summarize several recurring cognitive design patterns that have appeared in various pre-transformer AI architectures. We then explore how these patterns are evident in systems using LLMs, especially for reasoning and interactive ("agentic") use cases. By examining and applying these recurring patterns, we can also predict gaps or deficiencies in today's agentic LLM systems and identify likely subjects of future research toward general intelligence using LLMs and other generative foundation models.
Problem

Research questions and friction points this paper is trying to address.

- Identify mechanisms for general intelligence in AI/AGI architectures
- Explore cognitive design patterns in pre-transformer and LLM systems
- Predict gaps in agentic LLM systems for future research
Innovation

Methods, ideas, or system contributions that make the work stand out.

- Summarize cognitive design patterns from pre-transformer AI
- Explore how these patterns appear in LLM systems for reasoning
- Predict gaps in agentic LLM systems for future research
Robert E. Wray
Center for Integrated Cognition, IQMRI, Ann Arbor, MI 48105 USA

James R. Kirk
Research Scientist, Center for Integrated Cognition
Cognitive Architectures, Interactive Task Learning, Artificial Intelligence

John E. Laird
Center for Integrated Cognition, IQMRI, Ann Arbor, MI 48105 USA