AI Summary
Current cognitive architectures and autonomous agent research lack a unified, theoretically grounded definition of emotion.
Method: We propose the formulation "emotion as goal-performance discrepancy-driven, cognitively identifiable activity patterns," arguing that even logical reasoning processes can elicit emotion and revealing its intrinsic coupling with attention. We construct a mapping framework linking emotion to multi-scale cognitive activities, integrating classical emotion theories; design a parameterized generative function to formally characterize emotion's dynamic computational mechanism; and implement synergistic modeling of discrepancy detection, attentional modulation, and emotion representation within a cognitive architecture.
Contribution/Results: This work presents the first formal, computationally executable emotion model in a purely logical system. It delivers a unified computational framework for emotion that is both interpretable and practical for engineering, establishing theoretical and methodological groundwork for emotion modeling in autonomous intelligent agents.
Abstract
Emotions play a crucial role in human life. The research community has proposed many theories of emotion without reaching much consensus, and the situation is similar for emotions in cognitive architectures and autonomous agents. I propose in this paper that emotions are recognized patterns of cognitive activities. These activities are an agent's responses to deviations between the targets of its goals and the performance of its actions. Emotions still arise even if these activities are purely logical. I map the patterns of cognitive activities to emotions, show the link between emotions and attention, and show how the parameterized functions of the cognitive architecture affect the computation of emotions. My proposition bridges different theories of emotion and advances the building of consensus.
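The core mechanism described above — detect a goal-performance discrepancy, pass it through a parameterized generative function, and label the resulting activity pattern as an emotion that also modulates attention — can be sketched as follows. This is an illustrative toy, not the paper's actual model: all names, the intensity function, and the pattern thresholds are hypothetical assumptions.

```python
# Hypothetical sketch of discrepancy-driven emotion computation.
# Assumptions (not from the paper): the Goal/appraise names, the
# saturating intensity function, and the three pattern labels.

from dataclasses import dataclass

@dataclass
class Goal:
    name: str
    target: float  # desired performance level for this goal

def emotion_intensity(discrepancy: float, gain: float = 2.0) -> float:
    """Hypothetical parameterized generative function: maps a
    goal-performance discrepancy to a bounded intensity in [0, 1).
    The gain parameter stands in for the architecture's tunable
    functions that shape how strongly emotions are computed."""
    return abs(discrepancy) / (abs(discrepancy) + 1.0 / gain)

def appraise(goal: Goal, performance: float) -> dict:
    """Detect the discrepancy, recognize an emotion-like activity
    pattern, and couple it to attention (larger discrepancies draw
    more attentional weight)."""
    discrepancy = performance - goal.target
    intensity = emotion_intensity(discrepancy)
    if abs(discrepancy) < 0.05:
        pattern = "contentment"   # performance matches the target
    elif discrepancy > 0:
        pattern = "joy"           # performance exceeds the target
    else:
        pattern = "frustration"   # performance falls short
    return {"goal": goal.name, "pattern": pattern,
            "intensity": round(intensity, 3),
            "attention": round(intensity, 3)}

result = appraise(Goal("prove_theorem", target=1.0), performance=0.4)
print(result)
```

Note that nothing in the loop is affective in origin: the inputs are purely logical quantities (targets and measured performance), matching the claim that emotions arise even from purely logical activity.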