Comparing the latent features of universal machine-learning interatomic potentials

📅 2025-12-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Universal machine-learning interatomic potentials (uMLIPs) exhibit inconsistent latent-space representations across models, and it remains unclear how chemical information is compressed into their latent features. Method: We propose an order-wise, cumulant-based paradigm for atom-to-structure-level feature compression and introduce feature reconstruction error as a unified metric to quantitatively assess how training data composition, loss functions, and optimization procedures affect latent feature encoding. Contribution/Results: Despite comparable prediction accuracy, mainstream uMLIPs yield markedly heterogeneous latent spaces. Latent representational capacity is jointly governed by data distribution and optimization strategy, and fine-tuning preserves pretraining biases. Crucially, the proposed structure-level features effectively encode local environmental variations, significantly enhancing cross-system generalizability. This work establishes a principled framework for interpreting and improving the chemical interpretability and transferability of uMLIP latent representations.

📝 Abstract
The past few years have seen the development of "universal" machine-learning interatomic potentials (uMLIPs) capable of approximating the ground-state potential energy surface across a wide range of chemical structures and compositions with reasonable accuracy. While these models differ in the architecture and the dataset used, they share the ability to compress a staggering amount of chemical information into descriptive latent features. Herein, we systematically analyze what the different uMLIPs have learned by quantitatively assessing the relative information content of their latent features with feature reconstruction errors as metrics, and observing how the trends are affected by the choice of training set and training protocol. We find that the uMLIPs encode chemical space in significantly distinct ways, with substantial cross-model feature reconstruction errors. When variants of the same model architecture are considered, trends become dependent on the dataset, target, and training protocol of choice. We also observe that fine-tuning of a uMLIP retains a strong pre-training bias in the latent features. Finally, we discuss how atom-level features, which are directly output by MLIPs, can be compressed into global structure-level features via concatenation of progressive cumulants, each adding significantly new information about the variability across the atomic environments within a given system.
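The cumulant-based compression described above can be illustrated with a minimal sketch: per-atom latent features are pooled into one structure-level vector by concatenating moments of increasing order across atoms. The function name, the use of raw central moments, and the normalization are assumptions for illustration; the paper's exact cumulant definitions may differ.

```python
import numpy as np

def structure_features(atom_feats, max_order=3):
    """Compress per-atom features of shape (n_atoms, d) into a single
    structure-level vector by concatenating, per feature dimension,
    statistics of increasing order over the atoms: the mean (order 1)
    plus central moments up to max_order. A sketch, not the paper's
    exact construction."""
    atom_feats = np.asarray(atom_feats, dtype=float)
    mu = atom_feats.mean(axis=0)          # order 1: mean over atoms
    parts = [mu]
    centered = atom_feats - mu
    for order in range(2, max_order + 1):
        # higher-order central moments capture the variability of
        # atomic environments within the structure
        parts.append((centered ** order).mean(axis=0))
    return np.concatenate(parts)          # shape: (max_order * d,)
```

For a structure whose atoms all share the same local environment, every moment beyond the mean vanishes, so only the first block carries information; heterogeneous structures populate the higher-order blocks, which is why each added cumulant contributes new information about environment variability.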
Problem

Research questions and friction points this paper is trying to address.

Analyzes latent feature differences in universal machine-learning interatomic potentials
Evaluates how training data and protocols affect chemical space encoding
Compresses atom-level features into global structure-level descriptors
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analyzing latent features of universal machine-learning interatomic potentials
Quantitatively assessing information content via reconstruction errors
Compressing atom-level features into global structure-level descriptors
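The reconstruction-error metric listed above can be sketched as a linear map fitted between two models' feature sets: if one latent space can be linearly reconstructed from the other with small residual, the two encode overlapping information. The ridge regularization and the relative-norm error are assumptions for this sketch, not necessarily the paper's exact estimator.

```python
import numpy as np

def reconstruction_error(X, Y, reg=1e-8):
    """Relative error of linearly reconstructing feature matrix Y
    (n_samples, d_Y) from feature matrix X (n_samples, d_X) via
    ridge regression. Returns 0 when Y is a linear function of X,
    approaching 1 when X carries no information about Y."""
    X = X - X.mean(axis=0)                # center both feature sets
    Y = Y - Y.mean(axis=0)
    # ridge solution W = (X^T X + reg*I)^-1 X^T Y
    W = np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ Y)
    residual = Y - X @ W
    return np.linalg.norm(residual) / np.linalg.norm(Y)
```

An asymmetric matrix of such errors between model pairs (X from one uMLIP, Y from another, evaluated on the same structures) is the kind of quantity that exposes how differently the models encode chemical space.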
Sofiia Chorna
Laboratory of Computational Science and Modeling, Institut des Matériaux, École Polytechnique Fédérale de Lausanne, 1015 Lausanne, Switzerland
Davide Tisi
Laboratory of Computational Science and Modeling, Institut des Matériaux, École Polytechnique Fédérale de Lausanne, 1015 Lausanne, Switzerland
Cesare Malosso
Laboratory of Computational Science and Modeling, Institut des Matériaux, École Polytechnique Fédérale de Lausanne, 1015 Lausanne, Switzerland
Wei Bin How
Laboratory of Computational Science and Modeling, Institut des Matériaux, École Polytechnique Fédérale de Lausanne, 1015 Lausanne, Switzerland
Michele Ceriotti
Professor at EPFL, Institute of Materials
Atomic-scale modeling, Machine learning, Materials science, Statistical mechanics, Physical
Sanggyu Chong
Laboratory of Computational Science and Modeling, Institut des Matériaux, École Polytechnique Fédérale de Lausanne, 1015 Lausanne, Switzerland