🤖 AI Summary
This work addresses the lack of thermodynamic consistency in existing molecular foundation models and the limitations of traditional domain-informed methods, which are restricted to single-property prediction and rely on small datasets. The authors propose the first multimodal framework that jointly leverages SMILES strings, molecular graphs, and 3D geometric structures, embedding thermodynamic equations as inductive biases to enable consistent multi-task prediction of nine thermophysical properties. The model incorporates a gated cross-modal attention mechanism, domain-constrained prediction heads, and a two-stage training strategy, supporting inference with missing modalities and enabling unsupervised recovery of thermodynamic parameters. Evaluated on a scaffold-split test set of 8,877 molecules, the model achieves an average R² of 0.716, outperforming ChemBERTa-2 across all tasks despite 2,000-fold fewer training molecules, particularly excelling in temperature-dependent property prediction.
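The summary mentions a gated cross-modal attention mechanism for fusing the SMILES, graph, and 3D embeddings. The paper's actual parameterization is not given here, so the following is only a minimal dependency-free sketch of the general idea: one modality attends over the others, and a sigmoid gate decides how much of the attended signal to mix back in. The gate here is a simple function of the attended vector rather than a learned projection, purely for illustration.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(query, keys, values):
    """Single-query scaled dot-product attention."""
    scale = math.sqrt(len(query))
    scores = [sum(q * k for q, k in zip(query, key)) / scale for key in keys]
    weights = softmax(scores)
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]

def gated_fusion(primary, other_modalities):
    """Fuse a primary modality embedding with cross-attention over the
    other modality embeddings. Illustrative only: the scalar gate is a
    sigmoid of the attended vector's mean, not a learned gate network."""
    attended = attend(primary, other_modalities, other_modalities)
    gate = 1.0 / (1.0 + math.exp(-sum(attended) / len(attended)))
    # Convex mix of the primary embedding and the attended context.
    return [gate * p + (1.0 - gate) * a for p, a in zip(primary, attended)]
```

With this gating, a missing modality can be handled by simply dropping its embedding from `other_modalities`; a learned gate could then down-weight the (less informative) attended context.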
📝 Abstract
Predicting physicochemical properties across chemical space is vital for chemical engineering, drug discovery, and materials science. Current molecular foundation models lack thermodynamic consistency, while domain-informed approaches are limited to single properties and small datasets. We introduce MultiPUFFIN, a domain-constrained multimodal foundation model addressing both limitations simultaneously. MultiPUFFIN features: (i) an encoder fusing SMILES, graphs, and 3D geometries via gated cross-modal attention, alongside experimental condition and descriptor encoders; (ii) prediction heads embedding established correlations (e.g., Wagner, Andrade, van't Hoff, and Shomate equations) as inductive biases to ensure thermodynamic consistency; and (iii) a two-stage multi-task training strategy.

Extending prior frameworks, MultiPUFFIN predicts nine thermophysical properties simultaneously. It is trained on a multi-source dataset of 37,968 unique molecules (40,904 rows). With roughly 35 million parameters, MultiPUFFIN achieves a mean $R^2 = 0.716$ on a challenging scaffold-split test set of 8,877 molecules. Compared to ChemBERTa-2 (pre-trained on 77 million molecules), MultiPUFFIN outperforms the fine-tuned baseline across all nine properties despite using 2,000x fewer training molecules. Advantages are strikingly apparent for temperature-dependent properties, where ChemBERTa-2 lacks the architectural capacity to incorporate thermodynamic conditions.

These results demonstrate that multimodal encoding and domain-informed biases substantially reduce data and compute requirements compared to brute-force pre-training. Furthermore, MultiPUFFIN handles missing modalities and recovers meaningful thermodynamic parameters without explicit supervision. Systematic ablation studies confirm the property-specific benefits of these domain-informed prediction heads.
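The abstract's domain-constrained prediction heads embed correlations such as the Wagner equation as inductive biases: the network emits molecule-specific coefficients, and a fixed functional form maps them (with temperature) to the property. The abstract does not specify the head's exact form, so this is a hedged sketch using the standard Wagner (3,6) vapor-pressure correlation, with coefficients passed as plain arguments where a model would predict them.

```python
import math

def wagner_vapor_pressure(T, Tc, Pc, a, b, c, d):
    """Wagner (3,6) correlation for saturated vapor pressure:

        ln(P / Pc) = (a*tau + b*tau**1.5 + c*tau**3 + d*tau**6) / Tr

    with reduced temperature Tr = T/Tc and tau = 1 - Tr. In a
    domain-constrained head, (a, b, c, d) would be per-molecule outputs
    of the network; Tc and Pc are the critical temperature and pressure.
    The functional form itself guarantees P(Tc) = Pc and a
    thermodynamically plausible temperature dependence.
    """
    Tr = T / Tc
    tau = 1.0 - Tr
    ln_pr = (a * tau + b * tau**1.5 + c * tau**3 + d * tau**6) / Tr
    return Pc * math.exp(ln_pr)
```

Analogous heads could wrap the Andrade equation (`ln(mu) = A + B/T`) for viscosity or the van't Hoff relation for equilibrium constants; in each case the correlation, not the raw network output, enforces consistency across temperatures.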