Mol-LLM: Generalist Molecular LLM with Improved Graph Utilization

📅 2025-02-05
📈 Citations: 0
Influential citations: 0
📄 PDF
🤖 AI Summary
General-purpose large language models (LLMs) lack an intrinsic understanding of molecular structure: they cannot reliably distinguish valid molecules from structurally corrupted negative samples, which severely limits their generalization on molecular tasks. Method: The paper proposes Mol-LLM, a generalist molecular LLM built on a multimodal instruction-tuning scheme that jointly integrates SMILES strings and molecular graphs, augmented with molecular structure preference optimization between chosen (original) and rejected (corrupted) graphs. This combination of multi-task instruction tuning and graph preference learning instills molecular topology understanding and chemical validity assessment in the model. Contribution/Results: Mol-LLM achieves state-of-the-art performance among generalist LLMs on most major molecular benchmarks while matching or exceeding specialist molecular models. Notably, it demonstrates superior cross-task generalization, particularly in reaction prediction, highlighting its robustness and versatility for diverse molecular AI applications.
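To make the preference-optimization idea concrete, here is a minimal sketch of a DPO-style objective over chosen (original) and rejected (structurally corrupted) molecules. The summary does not spell out the exact loss, so the function `preference_loss`, the `beta` value, and the toy log-probabilities below are illustrative assumptions; in practice the log-probabilities would come from the policy model and a frozen reference model scoring the tokenized molecule.

```python
# Sketch of preference optimization between "chosen" (original) and
# "rejected" (corrupted) molecules, in the style of DPO. The actual
# Mol-LLM loss may differ; this only illustrates the mechanism.
import torch
import torch.nn.functional as F

def preference_loss(policy_chosen_logp, policy_rejected_logp,
                    ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO-style loss: push the policy to prefer the chosen molecule
    over the rejected one, relative to a frozen reference model."""
    logits = beta * ((policy_chosen_logp - ref_chosen_logp)
                     - (policy_rejected_logp - ref_rejected_logp))
    return -F.logsigmoid(logits).mean()

# Toy usage with made-up sequence log-probabilities:
loss = preference_loss(torch.tensor([-12.0]), torch.tensor([-11.5]),
                       torch.tensor([-12.2]), torch.tensor([-11.4]))
print(loss)  # positive loss: the policy does not yet prefer the valid molecule
```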

📝 Abstract
Recent advances in Large Language Models (LLMs) have motivated the development of general LLMs for molecular tasks. While several studies have demonstrated that fine-tuned LLMs can achieve impressive benchmark performance, they are far from genuine generalist molecular LLMs due to a lack of fundamental understanding of molecular structure. Specifically, when given molecular task instructions, LLMs trained with naive next-token prediction assign similar likelihood scores to both original and negatively corrupted molecules, revealing a lack of the molecular structure understanding that is crucial for reliable and general molecular LLMs. To overcome this limitation and obtain a true generalist molecular LLM, we introduce a novel multi-modal training method that combines thorough multi-modal instruction tuning with molecular structure preference optimization between chosen and rejected graphs. On various molecular benchmarks, the proposed generalist molecular LLM, called Mol-LLM, achieves state-of-the-art performance among generalist LLMs on most tasks while surpassing or matching state-of-the-art specialist LLMs. Moreover, Mol-LLM shows superior generalization in reaction prediction tasks, demonstrating the effect of molecular structure understanding on generalization.
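The abstract's diagnostic, that naively trained LLMs score original and corrupted molecules similarly, can be reproduced with standard tooling. The sketch below scores a SMILES string and an invalid corruption under a HuggingFace causal LM; `gpt2` is a stand-in checkpoint rather than the paper's model, and the corruption shown is one hand-picked example.

```python
# Compare an LM's log-likelihood for an original SMILES string versus a
# structurally corrupted variant, as in the abstract's diagnostic.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder checkpoint, not the paper's model

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def sequence_log_likelihood(text: str) -> float:
    """Total log-likelihood of `text` under the LM, summed over tokens."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    # out.loss is the mean negative log-likelihood per predicted token,
    # so multiply by the number of predicted tokens and negate.
    return -out.loss.item() * (ids.shape[1] - 1)

original = "CCO"    # ethanol
corrupted = "CC(O"  # unbalanced parenthesis: invalid SMILES
print(sequence_log_likelihood(original), sequence_log_likelihood(corrupted))
# A structure-aware model should score `original` clearly higher;
# a naively trained one often scores both about the same.
```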
Problem

Research questions and friction points this paper is trying to address.

LLMs trained with naive next-token prediction lack fundamental understanding of molecular structure.
Fine-tuned LLMs fall short of being genuine generalist molecular models.
Generalization to molecular reaction prediction tasks remains poor.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-modal instruction tuning
Molecular structure preference optimization
Superior generalization in reaction prediction
👥 Authors
Chanhui Lee
GIST
Yuheon Song
Department of Artificial Intelligence, Korea University, Seoul, Korea
YongJun Jeong
Department of Artificial Intelligence, Korea University, Seoul, Korea
Hanbum Ko
Department of Artificial Intelligence, Korea University, Seoul, Korea
Rodrigo Hormazabal
LG AI Research, Seoul, Korea
Sehui Han
LG AI Research, Seoul, Korea
Kyunghoon Bae
LG AI Research
Sungbin Lim
Department of Statistics, Korea University, Seoul, Korea
Sungwoon Kim
Department of Artificial Intelligence, Korea University, Seoul, Korea