🤖 AI Summary
This study investigates whether large language models (LLMs) are evolving from traditional intuitive reasoning toward modern formal logic in their logical inference capabilities. To this end, we construct the first syllogistic evaluation dataset that integrates both traditional and modern logical standards, using existential presupposition as a probing lens to systematically assess the reasoning performance of mainstream LLMs. Our findings reveal that increasing model scale drives alignment with modern logic, while reasoning mechanisms such as chain-of-thought significantly accelerate this transition. Moreover, the choice of base model critically influences the ease and stability of this shift. This work provides the first systematic tracing of the evolutionary trajectory of logical reasoning in LLMs and identifies key factors shaping this development.
📝 Abstract
Human logic has gradually shifted from intuition-driven inference to rigorous formal systems. Motivated by recent advances in large language models (LLMs), we explore whether LLMs exhibit a similar evolution in their underlying logical framework. Using existential import as a probe, we evaluate syllogistic reasoning under both traditional and modern logic. Through extensive experiments testing SOTA LLMs on a new syllogism dataset, we report several findings: (i) scaling model size promotes the shift toward modern logic; (ii) thinking serves as an efficient accelerator beyond parameter scaling; (iii) the base model plays a crucial role in determining how easily and stably this shift emerges. Beyond these core factors, we conduct additional experiments for an in-depth analysis of the syllogistic reasoning behavior of current LLMs.
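To make the existential-import probe concrete: traditional (Aristotelian) logic presupposes that subject terms are non-empty, so "All S are P" entails "Some S are P" (subalternation); modern (Boolean) logic drops that presupposition, and the entailment fails when S is empty. The sketch below, a minimal set-theoretic illustration not taken from the paper's codebase, shows how the two standards diverge on exactly this inference:

```python
# Illustrative sketch (hypothetical, not the paper's evaluation code):
# how existential import separates traditional from modern logic on
# the inference "All S are P, therefore Some S are P".

def all_s_are_p(S, P):
    # "All S are P": true when S is a subset of P
    # (vacuously true in modern logic if S is empty).
    return S <= P

def some_s_are_p(S, P):
    # "Some S are P": requires at least one shared element.
    return len(S & P) > 0

def subalternation_holds(S, P, traditional=True):
    """Does 'All S are P' entail 'Some S are P' for this interpretation?

    Under the traditional standard, the subject term S is presupposed
    non-empty (existential import), so an empty S never arises.
    """
    if traditional and not S:
        return None  # inadmissible: traditional logic assumes S is non-empty
    if not all_s_are_p(S, P):
        return None  # premise false; the entailment is not tested
    return some_s_are_p(S, P)

# Non-empty subject: both standards agree the inference goes through.
print(subalternation_holds({"socrates"}, {"socrates", "plato"}))  # True

# Empty subject, modern reading: the premise is vacuously true but the
# conclusion is false, so the classic inference fails.
print(subalternation_holds(set(), {"plato"}, traditional=False))  # False
```

A model that accepts the inference regardless of whether the subject term could be empty is reasoning by the traditional standard; one that rejects it for possibly empty subjects follows the modern standard.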