Improving Equivariant Networks with Probabilistic Symmetry Breaking

📅 2025-03-27
📈 Citations: 3
Influential: 0
🤖 AI Summary
Equivariant networks cannot break input symmetries: by definition, their outputs retain at least the self-symmetries of their inputs. This makes them ill-suited for tasks requiring *active symmetry breaking*—e.g., reconstructing asymmetric structures from highly symmetric latent representations. To address this, the paper establishes necessary and sufficient conditions for representing equivariant conditional distributions and proposes SymPE (Symmetry-breaking Positional Encodings): a method that breaks symmetries in any equivariant network via randomized canonicalization, which admits a simple interpretation as random positional encodings, while retaining the group-equivariant inductive bias. SymPE integrates naturally with group-equivariant networks, graph neural networks, and diffusion-based generative frameworks. Empirically, it significantly improves performance on graph diffusion models, graph autoencoders, and lattice spin system modeling; theoretically, generalization bounds justify retaining the symmetry inductive bias.
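The core limitation and the fix can be illustrated in a few lines. The sketch below (an illustration under assumed details, not the paper's exact architecture or encoding) uses a toy permutation-equivariant message-passing layer on a 4-cycle graph: with symmetric input features the deterministic layer must output identical values at every node, while injecting a randomly sampled positional encoding breaks the tie per sample—and because the noise distribution is itself permutation-invariant, the induced conditional distribution remains equivariant.

```python
import numpy as np

rng = np.random.default_rng(0)

def equivariant_layer(x, adj):
    # Toy permutation-equivariant message passing: h_i = x_i + mean_j A_ij x_j
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1.0)
    return x + (adj @ x) / deg

# A 4-cycle graph: all nodes are equivalent under the graph's automorphisms.
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)

x = np.ones((4, 1))  # fully symmetric input features

# Deterministic equivariant network: the output inherits the input's
# self-symmetries, so every node gets the same value -- it cannot
# single out any one node.
y = equivariant_layer(x, adj)
assert np.allclose(y, y[0])

# SymPE-style randomization (a sketch, not the paper's exact encoding):
# add a random positional encoding before the equivariant layer. Each
# sampled output now distinguishes nodes, breaking the symmetry, while
# the distribution over outputs stays equivariant because the noise is
# sampled from a permutation-invariant distribution.
pe = rng.standard_normal((4, 1))
y_broken = equivariant_layer(x + pe, adj)
assert not np.allclose(y_broken, y_broken[0])
```

Averaging or sampling over many such encodings recovers an equivariant conditional distribution rather than an equivariant function, which is the representational gain the paper formalizes.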

📝 Abstract
Equivariance encodes known symmetries into neural networks, often enhancing generalization. However, equivariant networks cannot break symmetries: the output of an equivariant network must, by definition, have at least the same self-symmetries as the input. This poses an important problem, both (1) for prediction tasks on domains where self-symmetries are common, and (2) for generative models, which must break symmetries in order to reconstruct from highly symmetric latent spaces. This fundamental limitation can be addressed by considering equivariant conditional distributions, instead of equivariant functions. We present novel theoretical results that establish necessary and sufficient conditions for representing such distributions. Concretely, this representation provides a practical framework for breaking symmetries in any equivariant network via randomized canonicalization. Our method, SymPE (Symmetry-breaking Positional Encodings), admits a simple interpretation in terms of positional encodings. This approach expands the representational power of equivariant networks while retaining the inductive bias of symmetry, which we justify through generalization bounds. Experimental results demonstrate that SymPE significantly improves performance of group-equivariant and graph neural networks across diffusion models for graphs, graph autoencoders, and lattice spin system modeling.
Problem

Research questions and friction points this paper is trying to address.

Equivariant networks cannot break input symmetries
Symmetry breaking is needed for generative models
Handling self-symmetries in prediction tasks is challenging
Innovation

Methods, ideas, or system contributions that make the work stand out.

Probabilistic symmetry breaking via equivariant distributions
Randomized canonicalization for symmetry breaking
Symmetry-breaking Positional Encodings (SymPE)
Hannah Lawrence
PhD Student, Massachusetts Institute of Technology
Equivariant deep learning · theory of machine learning · Fourier algorithms
Vasco Portilheiro
Gatsby Computational Neuroscience Unit, UCL
Yan Zhang
Samsung – SAIT AI Lab, Montreal, Mila – Quebec Artificial Intelligence Institute
S. Kaba
Mila – Quebec Artificial Intelligence Institute, McGill University