🤖 AI Summary
This study investigates the reasoning capabilities of large language models (LLMs) in formal rule-based environments, focusing on state prediction and legal action generation within the domain of general game playing. For the first time, general game playing is employed as a benchmark framework, combining a set of 40 structural game features with multiple semantic obfuscation strategies to systematically evaluate multi-step forward reasoning across models including Gemini 2.5 Pro/Flash, Llama 3.3 70B, and GPT-OSS 120B. The findings reveal that while the models perform well on short-horizon tasks, their accuracy degrades significantly with increasing reasoning depth. The work further identifies characteristic failure modes, such as rule hallucination, redundant factual assertions, and syntactic errors, thereby delineating the current boundaries of LLMs' formal logical reasoning and highlighting their reliance on semantic heuristics rather than rigorous deductive mechanisms.
📝 Abstract
This paper examines the reasoning capabilities of Large Language Models (LLMs) from a novel perspective, focusing on their ability to operate within formally specified, rule-governed environments. We evaluate four LLMs (Gemini 2.5 Pro and Flash variants, Llama 3.3 70B, and GPT-OSS 120B) on a suite of forward-simulation tasks, including next- and multi-step state formulation and legal action generation, across a diverse set of reasoning problems instantiated as General Game Playing (GGP) game instances. Beyond reporting instance-level performance, we characterize games using 40 structural features and analyze correlations between these features and LLM performance. Furthermore, we investigate the effects of various game obfuscations to assess the role of linguistic semantics in game definitions and the impact of potential prior exposure of LLMs to specific games during training. The main results indicate that three of the four evaluated models generally perform well across most experimental settings, with performance degrading as the evaluation horizon increases (i.e., with a higher number of game steps). A detailed case-based analysis of LLM performance yields novel insights into common reasoning errors in the considered logic-based problem formulation, including hallucinated rules, redundant state facts, and syntactic errors. Overall, the paper reports clear progress in the formal reasoning capabilities of contemporary models.