🤖 AI Summary
This work addresses the lack of systematic architectural principles guiding agentic large language model (LLM) systems. We establish, for the first time, a design-pattern mapping between pre-transformer classical cognitive architectures and modern agentic LLM systems. Through cognitive-architecture analysis, pattern abstraction, and empirical behavioral study of LLM agents, we identify six high-frequency cognitive design patterns that are explicitly or implicitly instantiated in current systems. Building on this, we propose a pattern-matching-based defect prediction framework that reveals three critical capability gaps: dynamic goal reconfiguration, multi-granularity memory scheduling, and context-sensitive meta-reasoning. Our findings elucidate core architectural principles underlying artificial general intelligence (AGI) and yield a verifiable, pattern-driven roadmap for next-generation agentic architectures. This work bridges a foundational gap in AGI research by enabling cross-paradigm theoretical integration between classical cognitive science and contemporary LLM-based agency.
📝 Abstract
One goal of AI (and AGI) research is to identify and understand specific mechanisms and representations sufficient for general intelligence. Often, this work manifests in research focused on architectures, and many cognitive architectures have been explored in AI/AGI. However, different research groups, and even different research traditions, have somewhat independently identified similar or common patterns of processes and representations, or cognitive design patterns, that are manifest in existing architectures. Today, AI systems exploiting large language models (LLMs) offer a relatively new combination of mechanism and representation for exploring the possibilities of general intelligence. In this paper, we summarize a few recurring cognitive design patterns that have appeared in various pre-transformer AI architectures. We then explore how these patterns are evident in systems using LLMs, especially for reasoning and interactive ("agentic") use cases. By examining and applying these recurring patterns, we can also predict gaps or deficiencies in today's agentic LLM systems and identify likely subjects of future research toward general intelligence using LLMs and other generative foundation models.