AI Summary
Large language models (LLMs) exhibit fundamental limitations in abstract linguistic reasoning, particularly rule induction and cross-phenomenon generalization, because they rely on statistical patterns rather than formal metacognitive inference. Method: We introduce IOLBENCH, the first benchmark for LLM evaluation grounded in authentic International Linguistics Olympiad (IOL) problems. It spans syntax, morphology, phonology, and semantics, emphasizing self-contained, knowledge-agnostic metacognitive reasoning. For the first time, it systematically brings multi-step abstract induction and compositional generalization tasks from the IOL into LLM assessment, using a zero-shot, multi-task evaluation protocol and a rigorous automated scoring system designed to eliminate external knowledge contamination. Contribution/Results: Experiments reveal that state-of-the-art LLMs substantially underperform human Olympiad participants, especially in few-shot compositional reasoning and the abstraction of cross-linguistic regularities, exposing a critical deficit in formal linguistic rule discovery.
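To make the evaluation protocol concrete, here is a minimal sketch of a zero-shot scoring loop in the spirit described above. All names here (`Problem`, `exact_match`, `evaluate`) are hypothetical illustrations, not the paper's actual harness, which may normalize answers or score partial credit differently.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Problem:
    prompt: str   # self-contained IOL-style problem statement
    answer: str   # gold solution used for automated scoring

def exact_match(prediction: str, gold: str) -> bool:
    # Normalize whitespace and case so scoring reflects content, not formatting.
    return prediction.strip().lower() == gold.strip().lower()

def evaluate(model: Callable[[str], str], problems: list[Problem]) -> float:
    """Zero-shot: each problem is posed once, with no in-context examples."""
    correct = sum(exact_match(model(p.prompt), p.answer) for p in problems)
    return correct / len(problems)

if __name__ == "__main__":
    toy = [Problem("Given word pairs A -> B ..., translate X.", "y")]
    print(evaluate(lambda prompt: "y", toy))  # lambda stands in for a real LLM call
```

Because each problem carries its own gold answer and the prompt is self-contained, a loop like this needs no external linguistic resources, which is what keeps the evaluation knowledge-agnostic.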
Abstract
Despite the remarkable advancements and widespread applications of deep neural networks, their ability to perform reasoning tasks remains limited, particularly in domains requiring structured, abstract thought. In this paper, we investigate the linguistic reasoning capabilities of state-of-the-art large language models (LLMs) by introducing IOLBENCH, a novel benchmark derived from International Linguistics Olympiad (IOL) problems. This dataset encompasses diverse problems testing syntax, morphology, phonology, and semantics, all carefully designed to be self-contained and independent of external knowledge. These tasks challenge models to engage in metacognitive linguistic reasoning, requiring the deduction of linguistic rules and patterns from minimal examples. Through extensive benchmarking of leading LLMs, we find that even the most advanced models struggle to handle the intricacies of linguistic complexity, particularly in areas demanding compositional generalization and rule abstraction. Our analysis highlights both the strengths and persistent limitations of current models in linguistic problem-solving, offering valuable insights into their reasoning capabilities. By introducing IOLBENCH, we aim to foster further research into developing models capable of human-like reasoning, with broader implications for the fields of computational linguistics and artificial intelligence.