Training Text-to-Molecule Models with Context-Aware Tokenization

📅 2025-08-30
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing text-to-molecule generation models rely on atom-level tokenization, limiting their ability to capture global molecular structure and semantic context. Method: We propose Context-Aware Molecular T5 (CAMT5), the first model employing substructure-level tokenization—representing molecules as sequences of chemically meaningful substructures—and introduce an importance-weighted training strategy to enhance modeling of semantically critical units (e.g., pharmacophores). CAMT5 further integrates a pre-trained language model architecture with an ensemble learning framework to improve sequence generation robustness. Contribution/Results: Experiments demonstrate that CAMT5 significantly outperforms state-of-the-art methods across multiple text-to-molecule tasks. Notably, it achieves superior performance using only 2% of the training tokens required by prior approaches, striking an optimal balance between generation accuracy and computational efficiency.

Technology Category

Application Category

📝 Abstract
Recently, text-to-molecule models have shown great potential across various chemical applications, e.g., drug-discovery. These models adapt language models to molecular data by representing molecules as sequences of atoms. However, they rely on atom-level tokenizations, which primarily focus on modeling local connectivity, thereby limiting the ability of models to capture the global structural context within molecules. To tackle this issue, we propose a novel text-to-molecule model, coined Context-Aware Molecular T5 (CAMT5). Inspired by the significance of the substructure-level contexts in understanding molecule structures, e.g., ring systems, we introduce substructure-level tokenization for text-to-molecule models. Building on our tokenization scheme, we develop an importance-based training strategy that prioritizes key substructures, enabling CAMT5 to better capture the molecular semantics. Extensive experiments verify the superiority of CAMT5 in various text-to-molecule generation tasks. Intriguingly, we find that CAMT5 outperforms the state-of-the-art methods using only 2% of training tokens. In addition, we propose a simple yet effective ensemble strategy that aggregates the outputs of text-to-molecule models to further boost the generation performance. Code is available at https://github.com/Songhyeontae/CAMT5.git.
Problem

Research questions and friction points this paper is trying to address.

Addressing limitations of atom-level tokenization in text-to-molecule models
Improving global structural context capture in molecular representations
Enhancing molecular semantics understanding through substructure-level tokenization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Substructure-level tokenization for molecule representation
Importance-based training prioritizing key molecular substructures
Ensemble strategy aggregating outputs to boost performance
🔎 Similar Papers
No similar papers found.