🤖 AI Summary
Reinforcement learning (RL) for SMILES-based molecular generation suffers from catastrophic forgetting during fine-tuning, leading to a sharp decline in molecular validity.
Method: We propose a novel RL framework integrating real-time syntactic and chemical rule validation. Its core innovation is Partial SMILES Validation—a mechanism that, at each autoregressive generation step, concurrently evaluates the structural validity of the current token and all feasible subsequent token branches, enabling early pruning of invalid trajectories and enhancing exploration robustness. Built upon the PPO algorithm, the framework incorporates LLM-driven molecular representation and domain-knowledge-guided reward shaping.
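The core idea of validating a prefix at every generation step can be sketched with simple syntactic checks. This is an illustrative toy (the function name and rules are our own, not the paper's implementation, which also enforces chemical constraints such as valence limits): a prefix is rejected as soon as no completion could ever make it a valid SMILES string.

```python
# Toy partial-SMILES prefix check (illustrative only; the paper's
# validator additionally applies chemical rules such as valences).
def is_valid_partial(prefix: str) -> bool:
    """Return False if `prefix` can never extend to a valid SMILES."""
    depth = 0           # currently open '(' branches
    in_bracket = False  # inside a [...] atom specification
    prev = ""
    for ch in prefix:
        if in_bracket:
            if ch == "]":
                in_bracket = False
        elif ch == "[":
            in_bracket = True
        elif ch == "(":
            # a branch must follow an atom, not start the string
            # or immediately follow another '(' or a bond symbol
            if prev in ("", "(", "=", "#"):
                return False
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:   # closing a branch that was never opened
                return False
        elif ch in "=#":
            # a bond symbol cannot start the string or repeat
            if prev in ("", "=", "#"):
                return False
        prev = ch
    return True
```

Note that a prefix like `"CC(=O"` passes even though it is not yet a complete molecule: partial validation asks only whether some continuation can still succeed, which is what allows pruning before the full sequence is generated.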
Results: On the PMO and GuacaMol benchmarks, our method reduces the invalid-molecule rate to <0.5% while matching REINVENT's optimization performance and scaffold diversity, striking a strong balance between validity preservation and efficient exploration of chemical space.
📝 Abstract
SMILES-based molecule generation has emerged as a powerful approach in drug discovery. Deep reinforcement learning (RL) using large language models (LLMs) has been incorporated into the molecule generation process to achieve high scores in terms of the likelihood of desired molecule candidates. However, a critical challenge in this approach is catastrophic forgetting during the RL phase, where knowledge such as molecule validity, which often exceeds 99% during pretraining, significantly deteriorates. Current RL algorithms applied in drug discovery, such as REINVENT, use prior models as anchors to retain pretraining knowledge, but these methods lack robust exploration mechanisms. To address these issues, we propose Partial SMILES Validation-PPO (PSV-PPO), a novel RL algorithm that incorporates real-time partial SMILES validation to prevent catastrophic forgetting while encouraging exploration. Unlike traditional RL approaches that validate molecular structures only after generating entire sequences, PSV-PPO performs stepwise validation at each autoregressive step, evaluating not only the selected token candidate but also all potential branches stemming from the prior partial sequence. This enables early detection of invalid partial SMILES across all potential paths. As a result, PSV-PPO maintains high validity rates even during aggressive exploration of the vast chemical space. Our experiments on the PMO and GuacaMol benchmarks demonstrate that PSV-PPO significantly reduces the number of invalid generated structures while maintaining competitive exploration and optimization performance. While our work primarily focuses on maintaining validity, the PSV-PPO framework can be extended in future research to incorporate additional forms of valuable domain knowledge, further enhancing reinforcement learning applications in drug discovery.
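The abstract's "evaluating all potential branches" step can be sketched as masking the next-token distribution before sampling: every candidate extension of the current prefix is tested, and non-viable tokens get zero probability. All names here (`masked_distribution`, `balanced_so_far`) are illustrative, and the viability rule is a deliberately minimal stand-in for the paper's full syntactic and chemical validator.

```python
import math

def masked_distribution(prefix, vocab, logits, is_viable):
    """Softmax over `logits`, zeroing tokens whose extension of
    `prefix` can never complete to a valid SMILES."""
    exps = []
    for tok, logit in zip(vocab, logits):
        ok = is_viable(prefix + tok)
        exps.append(math.exp(logit) if ok else 0.0)
    total = sum(exps)
    if total == 0.0:      # every branch invalid: prune this trajectory
        return None
    return [e / total for e in exps]

# Toy viability rule: ')' may never outnumber '(' in the prefix.
def balanced_so_far(s):
    depth = 0
    for ch in s:
        depth += (ch == "(") - (ch == ")")
        if depth < 0:
            return False
    return True

# With prefix "CC", the ')' branch is detected as invalid and masked
# out; probability mass is renormalized over the surviving tokens.
probs = masked_distribution("CC", ["C", ")", "N", "("],
                            [0.0, 0.0, 0.0, 0.0], balanced_so_far)
```

Because invalid branches are removed from the sampling distribution itself rather than penalized after the fact, the policy never spends rollout budget on trajectories that are already doomed, which is how high validity and aggressive exploration can coexist.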