🤖 AI Summary
Existing chemical reaction prediction models exhibit limited generalization to out-of-distribution (OOD) scenarios—such as novel patents, authors, temporal shifts, or unseen reaction types—hindering their practical utility in reaction discovery. Method: We introduce the first comprehensive OOD benchmark for organic reaction prediction, systematically covering multidimensional distributional shifts—including temporal evolution, cross-patent/author generalization, and cross-reaction-type extrapolation. Using a prototypical SMILES-based deep learning model and a rigorous time-aware evaluation protocol, we assess performance under realistic deployment conditions. Contribution/Results: Our empirical analysis reveals that standard random train-test splits substantially overestimate model robustness: accuracy drops sharply on temporal and reaction-type extrapolation tasks. Critically, we provide the first empirical evidence that state-of-the-art models lack genuine chemical extrapolation capability. This work establishes a diagnostic framework and concrete improvement pathways for developing next-generation generalizable reaction prediction models.
📝 Abstract
Deep learning models for anticipating the products of organic reactions have found many use cases, including validating retrosynthetic pathways and constraining synthesis-based molecular design tools. Despite compelling performance on popular benchmark tasks, strange and erroneous predictions sometimes ensue when using these models in practice. The core issue is that common benchmarks test models in an in-distribution setting, whereas many real-world uses for these models are in out-of-distribution settings and require a greater degree of extrapolation. To better understand how current reaction predictors work in out-of-distribution domains, we report a series of more challenging evaluations of a prototypical SMILES-based deep learning model. First, we illustrate how performance on randomly sampled datasets is overly optimistic compared to performance when generalizing to new patents or new authors. Second, we conduct time splits that evaluate how models perform when tested on reactions published in years after those in their training set, mimicking real-world deployment. Finally, we consider extrapolation across reaction classes to reflect what would be required for the discovery of novel reaction types. This panel of tasks can reveal the capabilities and limitations of today's reaction predictors, acting as a crucial first step in the development of tomorrow's next-generation models capable of reaction discovery.
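The time-split protocol described above can be sketched in a few lines. This is a minimal, hypothetical illustration (the `Reaction` record, field names, and cutoff year are illustrative assumptions, not the paper's actual data schema): reactions published up to a cutoff year form the training set, and all later reactions form the test set, mimicking deployment on future chemistry.

```python
from dataclasses import dataclass

@dataclass
class Reaction:
    smiles: str  # reaction SMILES, e.g. "CCO.CC(=O)O>>CC(=O)OCC"
    year: int    # publication year of the source patent (hypothetical field)

def time_split(reactions, cutoff_year):
    """Train on reactions published up to cutoff_year; test on later ones.

    Unlike a random split, no test reaction predates any training reaction,
    so the model cannot benefit from leakage of future chemistry.
    """
    train = [r for r in reactions if r.year <= cutoff_year]
    test = [r for r in reactions if r.year > cutoff_year]
    return train, test

# Toy data for illustration only
data = [
    Reaction("CCO.CC(=O)O>>CC(=O)OCC", 2012),
    Reaction("c1ccccc1Br.OB(O)c1ccccc1>>c1ccc(-c2ccccc2)cc1", 2016),
    Reaction("CC=O.[H][H]>>CCO", 2019),
]
train, test = time_split(data, cutoff_year=2015)
```

A patent- or author-level split follows the same pattern, grouping by patent ID or author instead of year so that no document contributes reactions to both sides of the split.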