🤖 AI Summary
This paper investigates whether large language models (LLMs) possess genuine emergent intelligence. Addressing the central question—“Do emergent capabilities equate to emergent intelligence?”—it pioneers the integration of rigorous emergence analysis within the theoretical framework of complex systems, proposing empirically verifiable criteria to distinguish apparent from authentic emergent intelligence. Methodologically, the study combines phase-transition analysis, effective dimensionality reduction, and multi-scale modeling, complemented by empirical behavioral evaluation of LLMs. Key contributions include: (i) identifying a scale-driven, efficiency-oriented mechanism underlying emergent intelligence in LLMs; (ii) systematically refuting several prevalent pseudo-emergence claims; and (iii) demonstrating that LLMs exhibit bounded yet genuine emergent intelligence—evidenced by abrupt improvements in problem-solving capability, reduced energy consumption, and significantly enhanced generalization efficiency upon crossing critical model-scale thresholds.
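The phase-transition analysis mentioned above can be illustrated with a minimal sketch: fitting a logistic curve to benchmark accuracy as a function of log model scale, and reading off the critical scale at which capability jumps abruptly. The data, function names, and parameter values below are purely hypothetical illustrations, not results from the paper.

```python
# Hedged sketch: locating an abrupt capability jump ("phase transition")
# by fitting a logistic curve to accuracy vs. log10 parameter count.
# All numbers here are synthetic and illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def logistic(log_n, a, k, x0):
    """Accuracy modeled as a smooth sigmoid in log10 parameter count."""
    return a / (1.0 + np.exp(-k * (log_n - x0)))

# Synthetic accuracies at model scales from 10^7 to 10^12 parameters,
# generated from a known sigmoid plus a little noise.
log_scale = np.linspace(7.0, 12.0, 11)
rng = np.random.default_rng(0)
accuracy = logistic(log_scale, 0.9, 4.0, 9.5) + 0.01 * rng.normal(size=11)

# Fit recovers the asymptote a, steepness k, and critical scale x0.
(a, k, x0), _ = curve_fit(logistic, log_scale, accuracy, p0=[1.0, 1.0, 9.0])
print(f"estimated critical scale: 10^{x0:.1f} parameters (steepness k={k:.1f})")
```

A sharp estimated steepness `k` relative to the scale range is what such an analysis would read as evidence of an abrupt, phase-transition-like improvement rather than smooth scaling.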
📝 Abstract
Emergence is a concept in complexity science describing how many-body systems manifest novel higher-level properties, ones that can be described by replacing high-dimensional mechanisms with lower-dimensional effective variables and theories. This is captured by the idea "more is different". Intelligence is a consummate emergent property, manifesting increasingly efficient -- cheaper and faster -- uses of emergent capabilities to solve problems. This is captured by the idea "less is more". In this paper, we first examine claims that Large Language Models (LLMs) exhibit emergent capabilities, reviewing several approaches to quantifying emergence, and then ask whether LLMs possess emergent intelligence.