CogToM: A Comprehensive Theory of Mind Benchmark inspired by Human Cognition for Large Language Models

📅 2026-01-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limited scope of existing theory-of-mind (ToM) evaluations for large language models, which often rely on a single paradigm and fail to capture the full spectrum of human social cognition. To overcome this, the authors propose a comprehensive ToM benchmark inspired by human cognitive theories, encompassing 46 distinct task paradigms and over 8,000 bilingual (Chinese–English) samples, enabling the first multidimensional and structured assessment of ToM capabilities. Through large-scale human annotation, systematic evaluation across 22 prominent language models, and benchmark design grounded in cognitive science, the study reveals significant deficiencies in current models’ performance on complex ToM tasks, highlighting a structural gap between artificial systems and human-like social reasoning mechanisms.

📝 Abstract
Whether Large Language Models (LLMs) truly possess human-like Theory of Mind (ToM) capabilities has garnered increasing attention. However, existing benchmarks remain largely restricted to narrow paradigms such as false belief tasks, failing to capture the full spectrum of human cognitive mechanisms. We introduce CogToM, a comprehensive, theoretically grounded benchmark comprising over 8,000 bilingual instances across 46 paradigms, validated by 49 human annotators. A systematic evaluation of 22 representative models, including frontier models such as GPT-5.1 and Qwen3-Max, reveals significant performance heterogeneity and highlights persistent bottlenecks in specific dimensions. Further analysis based on human cognitive patterns suggests potential divergences between LLM and human cognitive structures. CogToM offers a robust instrument and perspective for investigating the evolving cognitive boundaries of LLMs.
Problem

Research questions and friction points this paper is trying to address.

Theory of Mind
Large Language Models
cognitive benchmark
human cognition
false belief tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Theory of Mind
cognitive benchmark
large language models
human cognition
bilingual evaluation
Haibo Tong
BrainCog Lab, Institute of Automation, Chinese Academy of Sciences; School of Artificial Intelligence, UCAS
Zeyang Yue
BrainCog Lab, Institute of Automation, Chinese Academy of Sciences
Feifei Zhao
BrainCog Lab, Institute of Automation, Chinese Academy of Sciences
Erliang Lin
BrainCog Lab, Institute of Automation, Chinese Academy of Sciences
Lu Jia
BrainCog Lab, Institute of Automation, Chinese Academy of Sciences
Ruolin Chen
BrainCog Lab, Institute of Automation, Chinese Academy of Sciences; School of Artificial Intelligence, UCAS
Yinqian Sun
BrainCog Lab, Institute of Automation, Chinese Academy of Sciences
Qian Zhang
BrainCog Lab, Institute of Automation, Chinese Academy of Sciences
Yi Zeng
Institute of Automation, Chinese Academy of Sciences
Brain-inspired AI, AI Safety, AI Ethics and Governance