🤖 AI Summary
Existing research on strategic reasoning predominantly employs a coarse-grained "presence/absence of information" semantics, which fails to distinguish the distinct roles and mechanisms of first-order, higher-order, and common knowledge.
Method: We propose a fine-grained model of strategic knowledge, grounded in dynamic epistemic logic, that formally distinguishes these knowledge types; by combining model checking with game-theoretic analysis, we systematically investigate their impact on multi-agent cooperation.
Contribution/Results: We show that common knowledge of strategies is a necessary condition for solving the consensus problem, and we illustrate, via the cooperative card game Hanabi, how higher-order knowledge of strategies improves strategic performance. We further study the decidability of the model checking problem for this knowledge model. The framework provides a more precise epistemic representation for multi-agent strategic reasoning and yields verifiable, analytically tractable tools for knowledge-sensitive protocol design and verification.
📝 Abstract
Most existing work on strategic reasoning simply adopts either an informed or an uninformed semantics. We propose a model in which knowledge of strategies can be specified at a fine-grained level. In particular, it is possible to distinguish first-order, higher-order, and common knowledge of strategies. We illustrate the effect of higher-order knowledge of strategies by studying the game Hanabi. Further, we show that common knowledge of strategies is necessary to solve the consensus problem. Finally, we study the decidability of the model checking problem.
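The distinction between first-order, higher-order, and common knowledge can be made concrete with a toy Kripke-style model checker. This is a minimal sketch, not the paper's formal model: the worlds, agents, valuation, and indistinguishability relations below are invented for illustration.

```python
from itertools import chain

# Worlds and a valuation: which atomic facts hold where (invented example).
worlds = {"w1", "w2", "w3"}
val = {"w1": {"p"}, "w2": {"p"}, "w3": set()}

# Per-agent indistinguishability relations (equivalence classes).
# Agent a cannot tell w2 from w3; agent b cannot tell w1 from w2.
indist = {
    "a": {"w1": {"w1"}, "w2": {"w2", "w3"}, "w3": {"w2", "w3"}},
    "b": {"w1": {"w1", "w2"}, "w2": {"w1", "w2"}, "w3": {"w3"}},
}

def knows(agent, fact, world):
    """First-order knowledge: fact holds in every world the agent considers possible."""
    return all(fact(v) for v in indist[agent][world])

def common_knowledge(agents, fact, world):
    """Common knowledge: fact holds in every world reachable by chaining
    any agent's indistinguishability relation (reflexive-transitive closure)."""
    seen, frontier = set(), {world}
    while frontier:
        w = frontier.pop()
        seen.add(w)
        frontier |= set(chain.from_iterable(indist[a][w] for a in agents)) - seen
    return all(fact(w) for w in seen)

p = lambda w: "p" in val[w]

# At w1: b knows p, but b does not know that a knows p,
# and p is not common knowledge (w3, where p fails, is reachable via w2).
print(knows("b", p, "w1"))                            # first-order
print(knows("b", lambda w: knows("a", p, w), "w1"))   # higher-order
print(common_knowledge(["a", "b"], p, "w1"))          # common knowledge
```

The example shows why the three levels come apart: at `w1`, first-order knowledge of `p` holds for both agents, yet second-order knowledge already fails (b considers `w2` possible, where a cannot rule out `w3`), and common knowledge fails for the same reason.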