ModuLM: Enabling Modular and Multimodal Molecular Relational Learning with Large Language Models

πŸ“… 2025-06-01
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Current Molecular Relational Learning (MRL) suffers from three key challenges: fragmented model architectures, rigid input-modality constraints, and inconsistent, non-uniform evaluation protocols. To address these, we propose ModuLM, the first modular large language model (LLM) framework supporting multimodal molecular inputs (2D graphs and 3D conformations) with dynamic architecture switching. ModuLM decouples the encoder (8 2D and 11 3D variants), the interaction layer (7 designs), and the LLM backbone (7 models), enabling over 50,000 zero-redundancy composable configurations. It unifies molecular representation learning and relational reasoning within a single, extensible architecture. Empirically, ModuLM significantly improves model reusability and evaluation consistency across diverse MRL tasks, enabling fair cross-architecture benchmarking. This work establishes a standardized, scalable LLM-MRL infrastructure for AI for Science.

πŸ“ Abstract
Molecular Relational Learning (MRL) aims to understand interactions between molecular pairs, playing a critical role in advancing biochemical research. With the recent development of large language models (LLMs), a growing number of studies have explored the integration of MRL with LLMs and achieved promising results. However, the increasing availability of diverse LLMs and molecular structure encoders has significantly expanded the model space, presenting major challenges for benchmarking. Currently, there is no LLM framework that supports both flexible molecular input formats and dynamic architectural switching. To address these challenges, reduce redundant coding, and ensure fair model comparison, we propose ModuLM, a framework designed to support flexible LLM-based model construction and diverse molecular representations. ModuLM provides a rich suite of modular components, including 8 types of 2D molecular graph encoders, 11 types of 3D molecular conformation encoders, 7 types of interaction layers, and 7 mainstream LLM backbones. Owing to its highly flexible model assembly mechanism, ModuLM enables the dynamic construction of over 50,000 distinct model configurations. In addition, we provide comprehensive results to demonstrate the effectiveness of ModuLM in supporting LLM-based MRL tasks.
Problem

Research questions and friction points this paper is trying to address.

Lack of flexible LLM framework for molecular input formats
No support for dynamic architectural switching in MRL
Challenges in benchmarking diverse LLMs and encoders
Innovation

Methods, ideas, or system contributions that make the work stand out.

Modular framework for flexible LLM-based model construction
Supports diverse 2D and 3D molecular encoders
Enables dynamic assembly of 50,000+ model configurations
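The modular assembly idea can be illustrated with a short sketch. This is a hypothetical enumeration, not ModuLM's actual API: the component names below are placeholders, and the paper's 50,000+ figure presumably counts additional options (e.g., configuration settings beyond the three axes listed in the abstract), since the three listed axes alone yield 19 × 7 × 7 = 931 base combinations.

```python
from itertools import product

# Hypothetical component registries mirroring the counts reported for ModuLM
# (names are illustrative placeholders, not the framework's identifiers).
encoders_2d = [f"encoder2d_{i}" for i in range(8)]     # 8 2D molecular graph encoders
encoders_3d = [f"encoder3d_{i}" for i in range(11)]    # 11 3D conformation encoders
interactions = [f"interaction_{i}" for i in range(7)]  # 7 interaction-layer designs
llm_backbones = [f"llm_{i}" for i in range(7)]         # 7 mainstream LLM backbones

# One model configuration = (molecular encoder, interaction layer, LLM backbone)
configs = list(product(encoders_2d + encoders_3d, interactions, llm_backbones))
print(len(configs))  # (8 + 11) * 7 * 7 = 931 base combinations
```

Reaching the reported 50,000+ distinct configurations would require further per-component choices on top of this Cartesian product; the sketch only shows why decoupling components makes the model space grow multiplicatively rather than requiring redundant code per combination.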
πŸ”Ž Similar Papers
No similar papers found.
Zhuo Chen
University of Science and Technology of China, China; Suzhou Institute for Advanced Research, USTC, China
Yizhen Zheng
PhD candidate, Monash University
AI4Drug Discovery, LLMs, GNNs
Huan Yee Koh
Monash University
AI for Drug Discovery, Time Series, Natural Language Processing, Large Language Models
Hongxin Xiang
Hunan University
Linjiang Chen
USTC & Bham
Wenjie Du
University of Science and Technology of China, China; Suzhou Institute for Advanced Research, USTC, China
Yang Wang
University of Science and Technology of China, China; Suzhou Institute for Advanced Research, USTC, China