🤖 AI Summary
Universal machine-learning interatomic potentials (uMLIPs) exhibit inconsistent latent-space representations across models, and it remains unclear how chemical information is compressed into their latent features. Method: We propose an order-wise, cumulant-based scheme for compressing atom-level features into structure-level features and introduce feature reconstruction error as a unified metric to quantify how training-data composition, loss functions, and optimization procedures shape latent feature encoding. Contribution/Results: Despite comparable prediction accuracy, mainstream uMLIPs encode chemical space in markedly different ways. Latent representational capacity is jointly governed by data distribution and optimization strategy, and fine-tuning preserves pretraining biases. Crucially, the proposed structure-level features effectively capture variations across local atomic environments, substantially improving cross-system generalizability. This work establishes a principled framework for probing and improving the chemical interpretability and transferability of uMLIP latent representations.
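As a rough illustration of the reconstruction-error metric, the sketch below fits a linear map from one model's latent features to another's and reports the normalized residual. It assumes the features of two uMLIPs, evaluated on the same set of structures, are available as NumPy arrays; the ridge regularization strength and the variance-based normalization are illustrative choices, not necessarily the paper's exact protocol.

```python
import numpy as np
from sklearn.linear_model import Ridge

def cross_model_reconstruction_error(feats_a: np.ndarray, feats_b: np.ndarray) -> float:
    """Fit a linear map from model A's latent features (n_samples, d_a)
    to model B's (n_samples, d_b) and return the relative residual norm.
    A low error suggests A's latent space (linearly) contains the
    information encoded by B; a high error indicates B encodes
    information that A does not."""
    linear_map = Ridge(alpha=1e-6).fit(feats_a, feats_b)
    residual = feats_b - linear_map.predict(feats_a)
    # Normalize by the centered target norm so that errors are
    # comparable across models whose features live on different scales.
    return float(np.linalg.norm(residual) / np.linalg.norm(feats_b - feats_b.mean(axis=0)))
```

Note that this error is asymmetric: reconstructing B from A and A from B generally give different values, which is what makes it informative about the relative information content of the two latent spaces.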
📝 Abstract
The past few years have seen the development of "universal" machine-learning interatomic potentials (uMLIPs) capable of approximating the ground-state potential energy surface across a wide range of chemical structures and compositions with reasonable accuracy. While these models differ in architecture and training data, they share the ability to compress a staggering amount of chemical information into descriptive latent features. Herein, we systematically analyze what the different uMLIPs have learned by quantitatively assessing the relative information content of their latent features, using feature reconstruction errors as metrics, and by observing how the trends are affected by the choice of training set and training protocol. We find that the uMLIPs encode chemical space in significantly distinct ways, with substantial cross-model feature reconstruction errors. When variants of the same model architecture are considered, the trends become dependent on the dataset, target, and training protocol of choice. We also observe that fine-tuning a uMLIP retains a strong pre-training bias in the latent features. Finally, we discuss how atom-level features, which are directly output by MLIPs, can be compressed into global structure-level features via concatenation of progressive cumulants, each adding significant new information about the variability across the atomic environments within a given system.
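To make the cumulant-based compression concrete, here is a minimal sketch that concatenates per-dimension statistics of increasing order over the atoms of a structure. The choice of mean and variance for the first two orders, with standardized skewness and kurtosis standing in for the higher cumulants, is our assumption for illustration; the paper's exact statistics and normalization may differ.

```python
import numpy as np
from scipy import stats

def structure_level_features(atom_feats: np.ndarray, max_order: int = 4) -> np.ndarray:
    """Compress per-atom latent features (n_atoms, n_dims) into a single
    structure-level vector by concatenating per-dimension statistics of
    progressively higher order across atoms. Each added order captures
    more of the variability among the atomic environments."""
    blocks = [atom_feats.mean(axis=0)]                      # 1st order: mean
    if max_order >= 2:
        blocks.append(atom_feats.var(axis=0))               # 2nd order: variance
    if max_order >= 3:
        blocks.append(stats.skew(atom_feats, axis=0))       # 3rd order (standardized)
    if max_order >= 4:
        blocks.append(stats.kurtosis(atom_feats, axis=0))   # 4th order (standardized)
    return np.concatenate(blocks)                           # shape: (max_order * n_dims,)
```

Concatenating the orders, rather than pooling them into a single statistic, keeps each order's contribution separable, consistent with the abstract's observation that each successive cumulant adds significant new information about the spread of atomic environments within a system.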