🤖 AI Summary
Existing evaluation metrics for generative models inadequately balance structural fidelity and novelty, leading to inaccurate characterization of true generative performance. To address this, we propose the Transport Novelty Distance (TNovD), a distribution-level evaluation metric grounded in optimal transport theory and built on contrastive learning representations, which explicitly decouples quality from memorization behavior. TNovD combines graph neural network-based structural representations, optimal transport distance computation, and a threshold-driven decomposition of the transport coupling; while introduced for crystal structures, the framework is domain-agnostic and can be adapted to other modalities such as images and molecules. On the MP20 validation set and the WBM substitution dataset, TNovD outperforms baselines such as FID and Novelty by reliably identifying both low-fidelity and memorized samples. Furthermore, it enables systematic, large-scale benchmarking of state-of-the-art materials generation models, offering a principled, interpretable, and scalable assessment framework.
📝 Abstract
Recent advances in generative machine learning have opened new possibilities for the discovery and design of novel materials. However, as these models become more sophisticated, the need for rigorous and meaningful evaluation metrics has grown. Existing evaluation approaches often fail to capture both the quality and novelty of generated structures, limiting our ability to assess true generative performance. In this paper, we introduce the Transport Novelty Distance (TNovD) to judge generative models used for materials discovery jointly by the quality and novelty of the generated materials. Based on ideas from Optimal Transport theory, TNovD computes a coupling between the features of the training and generated sets, which a threshold then separates into a quality regime and a memorization regime. The features are extracted from crystal structures by a graph neural network trained with contrastive learning to distinguish between materials, their augmented counterparts, and differently sized supercells. We evaluate our proposed metric on typical toy experiments relevant to crystal structure prediction, including memorization, noise injection, and lattice deformations. Additionally, we validate TNovD on the MP20 validation set and the WBM substitution dataset, demonstrating that it detects both memorized and low-quality material data. We also benchmark the performance of several popular material generative models. While introduced for materials, our TNovD framework is domain-agnostic and can be adapted to other areas, such as images and molecules.
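To make the core idea concrete, here is a minimal sketch of the kind of computation the abstract describes: couple training and generated feature sets with optimal transport, then use a threshold to split the coupling into a memorization part (near-zero transport cost) and a quality part (the remaining distance). The function name `tnovd_sketch`, the threshold `tau`, and the exact decomposition rule are illustrative assumptions, not the paper's actual algorithm; for equal-size sets with uniform weights, exact OT reduces to a linear assignment problem, which we exploit below.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def tnovd_sketch(train_feats, gen_feats, tau=0.1):
    """Illustrative TNovD-style score (hypothetical decomposition).

    Couples the two feature sets with exact optimal transport and
    splits matched pairs by a cost threshold `tau` (assumed here):
    pairs with near-zero transport cost are treated as memorized,
    the remaining pairs contribute to a quality/novelty distance.
    """
    # Pairwise Euclidean cost matrix between the two feature sets.
    diff = train_feats[:, None, :] - gen_feats[None, :, :]
    cost = np.linalg.norm(diff, axis=-1)
    # Exact OT with equal-size uniform marginals reduces to assignment.
    rows, cols = linear_sum_assignment(cost)
    matched = cost[rows, cols]
    # Threshold-driven decomposition of the coupling.
    memorized = matched < tau
    mem_fraction = memorized.mean()
    distance = matched[~memorized].mean() if (~memorized).any() else 0.0
    return distance, mem_fraction

# Toy usage: two generated samples copy the training data exactly,
# two are far away, so half the coupling falls in the memorization regime.
train = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
gen = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0], [6.0, 6.0]])
d, f = tnovd_sketch(train, gen, tau=0.1)
```

In this toy case the memorized fraction is 0.5, and the distance term is driven only by the two non-memorized samples, which is the separation of quality from memorization that the threshold is meant to provide.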