AI Summary
This work addresses the lack of systematic evaluation of symbolic, verifiable reasoning over molecular graph structures in current chemical large language models. Existing benchmarks often suffer from label bias or information leakage, hindering precise diagnosis of model shortcomings. To bridge this gap, we propose MolecularIQ, the first evaluation framework specifically designed for symbolic reasoning on molecular graphs. By integrating molecular graph representations, symbolic logic verification, and carefully structured reasoning tasks, MolecularIQ establishes a fine-grained benchmark that uncovers systematic failure modes of contemporary models on specific molecular structures and reasoning challenges. The framework provides interpretable diagnostic insights and actionable directions for developing chemical large language models with faithful structural understanding.
Abstract
A molecule's properties are fundamentally determined by its composition and structure, as encoded in its molecular graph. Reasoning about molecular properties therefore requires the ability to parse and understand that graph. Large Language Models (LLMs) are increasingly applied to chemistry, tackling tasks such as molecular name conversion, captioning, text-guided generation, and property or reaction prediction. Yet most existing benchmarks emphasize general chemical knowledge, rely on literature-derived or surrogate labels that risk leakage or bias, or reduce evaluation to multiple-choice questions. We introduce MolecularIQ, a molecular structure reasoning benchmark focused exclusively on symbolically verifiable tasks. MolecularIQ enables fine-grained evaluation of reasoning over molecular graphs and reveals capability patterns that localize model failures to specific tasks and molecular structures. This provides actionable insights into the strengths and limitations of current chemistry LLMs and guides the development of models that reason faithfully over molecular structure.