🤖 AI Summary
Machine-learned interatomic potentials (MLIPs) critically depend on the scale and diversity of training data; however, large-scale datasets incur prohibitive computational costs, while small datasets often omit rare but critical atomic environments.
Method: We propose the first information-theoretic, atom-level data compression framework, formulating subset selection as a Minimum Set Cover (MSC) problem. By leveraging atom-centered environment descriptors and information entropy quantification, our approach efficiently preserves rare configurations and long-tail features in force distributions. Integrated into the open-source QUESTS toolkit, it supports outlier detection and high-dimensional redundancy reduction.
Results: Evaluated on GAP-20, TM23, and 64 ColabFit datasets, our method yields MLIPs with lower prediction errors at equivalent dataset sizes and significantly outperforms existing subsampling strategies in out-of-distribution generalization.
📝 Abstract
Machine learning interatomic potentials (MLIPs) offer accuracy approaching that of density functional theory calculations at a fraction of the cost, but their performance often depends on the size and diversity of the training dataset. Large datasets improve model accuracy and generalization but are computationally expensive to produce and train on, while smaller datasets risk discarding rare but important atomic environments, compromising MLIP accuracy and reliability. Here, we develop an information-theoretic framework to quantify the efficiency of dataset compression methods and propose an algorithm that maximizes this efficiency. By framing atomistic dataset compression as an instance of the minimum set cover (MSC) problem over atom-centered environments, our method identifies the smallest subset of structures that retains as much information as possible from the original dataset while pruning redundant information. The approach is demonstrated extensively on the GAP-20 and TM23 datasets and validated on 64 varied datasets from the ColabFit repository. Across all cases, MSC consistently retains outliers, preserves dataset diversity, and reproduces the long tails of force distributions even at high compression rates, outperforming other subsampling methods. Furthermore, MLIPs trained on MSC-compressed datasets exhibit reduced errors on out-of-distribution data, even in low-data regimes. We explain these results with an outlier analysis and show that such quantitative conclusions could not be reached with conventional dimensionality reduction methods. The algorithm is implemented in the open-source QUESTS package and supports several tasks in atomistic modeling, from data subsampling and outlier detection to training improved MLIPs at lower cost.
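To make the set-cover framing concrete, here is a minimal sketch of greedy minimum-set-cover subset selection over atom-centered environments. This is an illustration under simplifying assumptions, not the QUESTS implementation: the descriptors are random placeholders, and environments are discretized into histogram bins of a hypothetical `bin_width` so that "covering" a bin stands in for covering a region of environment space.

```python
import numpy as np

def greedy_set_cover(descriptors_per_structure, bin_width=0.25):
    """Select structures whose atomic environments, discretized into
    histogram bins, jointly cover every bin observed in the dataset."""
    # Map each structure's atomic environments to a set of hashable bin indices.
    covers = []
    for desc in descriptors_per_structure:
        bins = {tuple(b) for b in np.floor(np.asarray(desc) / bin_width).astype(int)}
        covers.append(bins)
    universe = set().union(*covers)

    selected, covered = [], set()
    while covered != universe:
        # Greedy step: pick the structure contributing the most uncovered bins.
        i = max(range(len(covers)), key=lambda j: len(covers[j] - covered))
        if not covers[i] - covered:
            break  # no structure adds new information
        selected.append(i)
        covered |= covers[i]
    return selected

# Toy example: 4 "structures", each with five 2D environment descriptors.
rng = np.random.default_rng(0)
data = [rng.normal(size=(5, 2)) for _ in range(4)]
subset = greedy_set_cover(data)
print(subset)  # indices of the structures forming the cover
```

Because structures containing rare environments occupy bins no other structure covers, the greedy cover is forced to include them, which is why this style of selection preserves outliers and long-tail features that random or distance-based subsampling tends to discard.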