🤖 AI Summary
This study addresses enzymatic reaction prediction, a core challenge in biochemical modeling, by proposing a unified multitask framework based on Llama-3.1 (8B and 70B) that jointly models EC number prediction, forward synthesis, and retrosynthesis. The method combines multitask learning, parameter-efficient fine-tuning via LoRA adapters, and structured prompting to improve generalization under low-resource conditions. The study also systematically identifies and characterizes inherent limitations of large language models (LLMs) in hierarchical EC classification. Experiments show consistent gains over single-task baselines on both forward synthesis and retrosynthesis, and the approach remains robust in few-shot settings. These results indicate that LLMs can encode, retain, and transfer enzyme-specific biochemical knowledge, offering a scalable foundation for data-scarce enzymology applications.
📝 Abstract
Predicting enzymatic reactions is crucial for applications in biocatalysis, metabolic engineering, and drug discovery, yet it remains a complex and resource-intensive task. Large Language Models (LLMs) have recently demonstrated remarkable success in various scientific domains, e.g., through their ability to generalize knowledge, reason over complex structures, and leverage in-context learning strategies. In this study, we systematically evaluate the capability of LLMs, particularly the Llama-3.1 family (8B and 70B), across three core biochemical tasks: Enzyme Commission number prediction, forward synthesis, and retrosynthesis. We compare single-task and multitask learning strategies, employing parameter-efficient fine-tuning via LoRA adapters. Additionally, we assess performance across different data regimes to explore model adaptability in low-data settings. Our results demonstrate that fine-tuned LLMs capture biochemical knowledge, with multitask learning enhancing forward- and retrosynthesis predictions by leveraging shared enzymatic information. We also identify key limitations, such as difficulties with the hierarchical EC classification scheme, highlighting areas for further improvement in LLM-driven biochemical modeling.
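The parameter-efficient fine-tuning mentioned above rests on the LoRA idea: a frozen weight matrix W is augmented with a trainable low-rank update B·A, so only r·(d_in + d_out) parameters are trained instead of d_in·d_out. The following is a minimal illustrative sketch of that forward pass; all dimensions and values are invented for illustration and are not taken from the paper.

```python
# Sketch of the LoRA forward pass: h = W x + (alpha / r) * B (A x).
# W is frozen; only the low-rank adapters A (r x d_in) and B (d_out x r)
# are trained. B is zero-initialized, so the adapted model starts out
# identical to the base model. Toy dimensions; real adapters attach to
# the attention projections of a transformer such as Llama-3.1.

def matvec(M, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m_ij * x_j for m_ij, x_j in zip(row, x)) for row in M]

def lora_forward(W, A, B, x, alpha=16, r=2):
    """Base output plus scaled low-rank correction."""
    base = matvec(W, x)                   # frozen path: W x
    low_rank = matvec(B, matvec(A, x))    # trainable path: B (A x)
    scale = alpha / r
    return [b + scale * l for b, l in zip(base, low_rank)]

# Toy 3x3 frozen weight and rank-2 adapters.
W = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
A = [[0.1, 0.2, 0.3], [0.0, 0.1, 0.0]]            # r x d_in
B = [[0.0, 0.0], [0.0, 0.0], [0.0, 0.0]]          # d_out x r, zero init

x = [1.0, 2.0, 3.0]
print(lora_forward(W, A, B, x))  # zero-init B -> identical to W x: [1.0, 2.0, 3.0]
```

In practice this is what libraries such as Hugging Face PEFT implement when wrapping a base model; the multitask setup in the study would then share one set of such adapters across the EC-prediction, forward-synthesis, and retrosynthesis training data.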