🤖 AI Summary
This study addresses the growing disconnect between AI cognition research and the broader cognitive sciences—specifically, the lack of rigorous frameworks for comparing cognitive capacities across AI systems, humans, and non-human animals, a gap that risks erroneous inferences about similarity or divergence. To bridge it, we introduce “Comparative Cognition for AI” as a novel paradigm that systematically integrates psychometrics, experimental design from cross-species behavioral research, cognitive modeling, and interpretability-based evaluation. Our key contribution is a dual-dimension standard for AI cognitive assessment: *functional equivalence* (behavioral parity under comparable task conditions) and *mechanistic comparability* (structural and process-level alignment amenable to cross-system analysis). This framework moves beyond superficial behavioral analogy while avoiding anthropomorphic bias. It provides both theoretical grounding and methodological guidance for principled, cross-domain comparison of intelligence, thereby advancing the integration of AI research into mainstream cognitive science.
📝 Abstract
Researchers are increasingly subjecting artificial intelligence systems to psychological testing. But to rigorously compare their cognitive capacities with humans and other animals, we must avoid both over- and under-stating our similarities and differences. By embracing a comparative approach, we can integrate AI cognition research into the broader cognitive sciences.