How unconstrained machine-learning models learn physical symmetries

📅 2026-03-25
📈 Citations: 0
✹ Influential: 0
đŸ€– AI Summary
This work addresses the challenge of enforcing physical symmetries, such as rotational equivariance, in machine-learning models without imposing explicit architectural constraints. The authors introduce rigorous, architecture-agnostic metrics to quantify how much symmetry the learned representations contain and how accurately the outputs fulfill the equivariance condition, use spectral analysis to diagnose failure modes, and study how a simple data-augmentation strategy lets unconstrained transformer-based models (a graph neural network for atomistic simulations and a PointNet-style architecture for particle physics) learn approximately equivariant behavior layer by layer during training. Experiments demonstrate that strategically injecting only the minimal required inductive biases substantially improves physical fidelity, numerical stability, and predictive accuracy, while preserving the expressivity and scalability of unconstrained architectures.

📝 Abstract
The requirement of generating predictions that exactly fulfill the fundamental symmetry of the corresponding physical quantities has profoundly shaped the development of machine-learning models for physical simulations. In many cases, models are built using constrained mathematical forms that ensure that symmetries are enforced exactly. However, unconstrained models that do not obey rotational symmetries are often found to have competitive performance, and to be able to *learn* to a high level of accuracy an approximate equivariant behavior with a simple data augmentation strategy. In this paper, we introduce rigorous metrics to measure the symmetry content of the learned representations in such models, and assess the accuracy with which the outputs fulfill the equivariant condition. We apply these metrics to two unconstrained, transformer-based models operating on decorated point clouds (a graph neural network for atomistic simulations and a PointNet-style architecture for particle physics) to investigate how symmetry information is processed across architectural layers and is learned during training. Based on these insights, we establish a rigorous framework for diagnosing spectral failure modes in ML models. Enabled by this analysis, we demonstrate that one can achieve superior stability and accuracy by strategically injecting the minimum required inductive biases, preserving the high expressivity and scalability of unconstrained architectures while guaranteeing physical fidelity.
Problem

Research questions and friction points this paper is trying to address.

physical symmetries
equivariance
unconstrained models
machine learning
symmetry learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

symmetry learning
equivariance
unconstrained models
inductive bias
spectral diagnostics
Michelangelo Domina
Laboratory of Computational Science and Modeling, Institut des MatĂ©riaux, École Polytechnique FĂ©dĂ©rale de Lausanne, Lausanne, Switzerland
Joseph William Abbott
Laboratory of Computational Science and Modeling, Institut des MatĂ©riaux, École Polytechnique FĂ©dĂ©rale de Lausanne, Lausanne, Switzerland
Paolo Pegolo
Laboratory of Computational Science and Modeling, Institut des MatĂ©riaux, École Polytechnique FĂ©dĂ©rale de Lausanne, Lausanne, Switzerland
Filippo Bigi
Laboratory of Computational Science and Modeling, Institut des MatĂ©riaux, École Polytechnique FĂ©dĂ©rale de Lausanne, Lausanne, Switzerland
Michele Ceriotti
Professor at EPFL, Institute of Materials
Atomic-scale modeling · Machine learning · Materials science · Statistical mechanics · Physical