AI Summary
This work proposes a unified theoretical framework to uncover universal organizational principles of cognition across natural and artificial systems, aiming to coherently explain problem-solving mechanisms in multiscale agents, from biological cells to artificial intelligence models. The central thesis posits that cognition fundamentally arises from remapping and navigation within embedding spaces, enabling adaptive behavior through iterative error minimization. Integrating embedding-space modeling, distributed error correction, and iterative optimization, the approach encompasses diverse architectures including Transformers, diffusion models, and neural cellular automata. For the first time, remapping and navigation are established as substrate- and scale-invariant cognitive mechanisms, bridging biological and artificial intelligence and laying the foundation for the development of cross-scale adaptive intelligent systems.
Abstract
The emerging field of diverse intelligence seeks an integrated view of problem-solving in agents of very different provenance, composition, and substrates. From subcellular chemical networks to swarms of organisms, and across evolved, engineered, and chimeric systems, it is hypothesized that scale-invariant principles of decision-making can be discovered. We propose that cognition in both natural and synthetic systems can be characterized and understood by the interplay between two equally important invariants: (1) the remapping of embedding spaces, and (2) the navigation within these spaces. Biological collectives, from single cells to entire organisms (and beyond), remap transcriptional, morphological, physiological, or 3D spaces to maintain homeostasis and regenerate structure, while navigating these spaces through distributed error correction. Modern Artificial Intelligence (AI) systems, including Transformers, diffusion models, and neural cellular automata, enact analogous processes by remapping data into latent embeddings and refining them iteratively through contextualization. We argue that this dual principle - remapping and navigation of embedding spaces via iterative error minimization - constitutes a substrate-independent invariant of cognition. Recognizing this shared mechanism not only illuminates deep parallels between living systems and artificial models, but also provides a unifying framework for engineering adaptive intelligence across scales.
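As a minimal toy sketch of the dual principle (not anything from the paper itself): the encoder `W`, the `target` state, and the step size below are all illustrative assumptions. "Remapping" is caricatured as a linear projection into a low-dimensional embedding space, and "navigation" as gradient steps through that space that iteratively minimize the error between the decoded state and a goal state.

```python
import numpy as np

rng = np.random.default_rng(0)

# Remapping: a hypothetical encoder relating a 3D embedding space
# to an 8D observation space (a random linear map, for illustration).
W = rng.normal(size=(8, 3))
target = rng.normal(size=8)   # desired state, e.g. a homeostatic setpoint
z = np.zeros(3)               # starting position in the embedding space

# Navigation: iterative error minimization. Each step measures the
# mismatch in observation space and moves z in latent coordinates
# to reduce it (plain gradient descent on the squared error).
lr = 0.02
for _ in range(500):
    error = W @ z - target
    z -= lr * (W.T @ error)

residual = np.linalg.norm(W @ z - target)
initial = np.linalg.norm(target)   # error before any navigation (z = 0)
```

Under these assumptions, `residual` ends up strictly smaller than `initial`: navigation drives the system as close to the goal as the embedding allows, with any leftover error reflecting the part of the target that the 3D latent space cannot represent.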