Pushing the limits of unconstrained machine-learned interatomic potentials

📅 2026-01-22
📈 Citations: 3
Influential: 0
🤖 AI Summary
This work investigates whether machine-learned interatomic potentials (MLIPs) trained without explicit enforcement of physical constraints can outperform models that strictly adhere to physical laws in large-scale training regimes. The authors propose a paradigm that forgoes hard-coded physical symmetries and conservation laws during training, instead leveraging massive datasets and highly parameterized architectures to increase representational capacity, while restoring essential physical consistency through lightweight post-processing at inference time. Experiments show that such unconstrained MLIPs achieve significantly higher prediction accuracy than constrained counterparts on static simulation tasks such as geometry optimization and lattice dynamics, while also being computationally more efficient. These findings support the effectiveness and practicality of a "large data plus inference-time correction" approach to atomic-scale modeling.

📝 Abstract
Machine-learned interatomic potentials (MLIPs) are increasingly used to replace computationally demanding electronic-structure calculations to model matter at the atomic scale. The most commonly used model architectures are constrained to fulfill a number of physical laws exactly, from geometric symmetries to energy conservation. Evidence is mounting that relaxing some of these constraints can be beneficial to the efficiency and (somewhat surprisingly) accuracy of MLIPs, even though care should be taken to avoid qualitative failures associated with the breaking of physical symmetries. Given the recent trend of *scaling up* models to larger numbers of parameters and training samples, a very important question is how unconstrained MLIPs behave in this limit. Here we investigate this issue, showing that -- when trained on large datasets -- unconstrained models can be superior in accuracy and speed when compared to physically constrained models. We assess these models both in terms of benchmark accuracy and in terms of usability in practical scenarios, focusing on static simulation workflows such as geometry optimization and lattice dynamics. We conclude that accurate unconstrained models can be applied with confidence, especially since simple inference-time modifications can be used to recover observables that are consistent with the relevant physical symmetries.
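The abstract does not specify which inference-time modifications the authors use, but a standard way to restore broken symmetries at inference is to average a model's predictions over symmetry operations. The sketch below (an illustrative assumption, not the paper's implementation) symmetrizes the force predictions of a hypothetical `model` callable by averaging over random rotations and then removing the net force, so the output respects rotational equivariance on average and exact momentum conservation:

```python
import numpy as np

def random_rotation(rng):
    """Draw an approximately uniform random 3x3 rotation matrix via QR."""
    A = rng.normal(size=(3, 3))
    Q, R = np.linalg.qr(A)
    Q = Q * np.sign(np.diag(R))   # fix column signs for uniformity
    if np.linalg.det(Q) < 0:      # ensure a proper rotation, det = +1
        Q[:, 0] = -Q[:, 0]
    return Q

def symmetrized_forces(model, positions, n_rot=8, seed=0):
    """Average force predictions over random rotations of the input,
    then subtract the mean force so the total force is exactly zero.

    `model` is a hypothetical callable mapping an (N, 3) array of atomic
    positions to an (N, 3) array of predicted forces.
    """
    rng = np.random.default_rng(seed)
    acc = np.zeros_like(positions)
    for _ in range(n_rot):
        R = random_rotation(rng)
        f_rot = model(positions @ R.T)  # predict in the rotated frame
        acc += f_rot @ R                # rotate forces back
    f = acc / n_rot
    return f - f.mean(axis=0)           # enforce zero net force
```

For a model that is already exactly equivariant, this post-processing is a no-op (up to the net-force projection); for an unconstrained model, it suppresses the rotation-dependent component of the error at the cost of `n_rot` extra forward passes.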
Problem

Research questions and friction points this paper is trying to address.

machine-learned interatomic potentials
unconstrained models
physical symmetries
scaling up
accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

unconstrained machine-learned interatomic potentials
scaling up
physical symmetries
inference-time modifications
accuracy-efficiency trade-off
Filippo Bigi
Laboratory of Computational Science and Modeling, Institut des Matériaux, École Polytechnique Fédérale de Lausanne, 1015 Lausanne, Switzerland
Paolo Pegolo
Laboratory of Computational Science and Modeling, Institut des Matériaux, École Polytechnique Fédérale de Lausanne, 1015 Lausanne, Switzerland
Arslan Mazitov
Laboratory of Computational Science and Modeling, Institut des Matériaux, École Polytechnique Fédérale de Lausanne, 1015 Lausanne, Switzerland
Michele Ceriotti
Professor at EPFL, Institute of Materials
Atomic-scale modeling · Machine learning · Materials science · Statistical mechanics · Physical