🤖 AI Summary
Equivariant networks strictly preserve input symmetries, making them ill-suited for generative tasks that require *active symmetry breaking*, such as reconstructing asymmetric structures from highly symmetric latent representations. To address this, we establish necessary and sufficient conditions for representing equivariant conditional distributions and propose SymPE, a method that achieves *controllable symmetry breaking* via randomized, learnable positional encodings while preserving the group-equivariant inductive bias. SymPE unifies probabilistic symmetry breaking, positional encoding, and equivariant graph neural networks, and integrates naturally with diffusion-based generative frameworks. Empirically, it significantly improves performance on graph diffusion modeling, graph autoencoding, and lattice spin system generation; theoretically, generalization bounds justify that SymPE retains the inductive bias of symmetry while expanding representational power.
📝 Abstract
Equivariance encodes known symmetries into neural networks, often enhancing generalization. However, equivariant networks cannot break symmetries: the output of an equivariant network must, by definition, have at least the same self-symmetries as the input. This poses an important problem, both (1) for prediction tasks on domains where self-symmetries are common, and (2) for generative models, which must break symmetries in order to reconstruct from highly symmetric latent spaces. This fundamental limitation can be addressed by considering equivariant conditional distributions, instead of equivariant functions. We present novel theoretical results that establish necessary and sufficient conditions for representing such distributions. Concretely, this representation provides a practical framework for breaking symmetries in any equivariant network via randomized canonicalization. Our method, SymPE (Symmetry-breaking Positional Encodings), admits a simple interpretation in terms of positional encodings. This approach expands the representational power of equivariant networks while retaining the inductive bias of symmetry, which we justify through generalization bounds. Experimental results demonstrate that SymPE significantly improves performance of group-equivariant and graph neural networks across diffusion models for graphs, graph autoencoders, and lattice spin system modeling.
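The core mechanism can be illustrated with a toy example: an equivariant layer must assign identical outputs to interchangeable nodes, but concatenating a randomized positional encoding breaks that tie while the layer itself remains permutation-equivariant. A minimal NumPy sketch of this general idea (the layer and the i.i.d. Gaussian encoding here are illustrative stand-ins, not the paper's actual SymPE construction):

```python
import numpy as np

def equivariant_layer(x, adj):
    """Permutation-equivariant update: distinct self vs. neighbor weights."""
    return 2.0 * x + adj @ x

def random_pe(n_nodes, dim, rng):
    """Illustrative randomized positional encoding (i.i.d. per node)."""
    return rng.standard_normal((n_nodes, dim))

rng = np.random.default_rng(0)
adj = np.array([[0.0, 1.0], [1.0, 0.0]])  # two interchangeable nodes
x = np.ones((2, 4))                        # identical node features

# Without a symmetry-breaking encoding, the symmetric nodes stay identical.
plain = equivariant_layer(x, adj)
print(np.allclose(plain[0], plain[1]))     # True: symmetry preserved

# With a random PE the node outputs differ: the input symmetry is broken ...
pe = random_pe(2, 4, rng)
broken = equivariant_layer(np.concatenate([x, pe], axis=1), adj)
print(np.allclose(broken[0], broken[1]))   # False: symmetry broken

# ... yet the layer is still equivariant: permuting inputs permutes outputs.
P = np.array([[0.0, 1.0], [1.0, 0.0]])
lhs = equivariant_layer(P @ np.concatenate([x, pe], axis=1), P @ adj @ P.T)
rhs = P @ broken
print(np.allclose(lhs, rhs))               # True
```

Sampling a fresh encoding per forward pass turns the deterministic network into an equivariant *conditional distribution* over outputs, which is the object the paper's representation theorem characterizes.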