🤖 AI Summary
Conventional neural networks struggle to learn topological invariants due to a fundamental expressivity gap between their architecture and the underlying real-space topological structure.
Method: We propose a hybrid tensor network–neural network architecture that exactly embeds real-space topological invariants (such as the Chern number) into the model design, complemented by a rigorous trainability analysis and generalization assessment.
Contribution/Results: This work achieves the first exact, differentiable parametrization of topological invariants within a learnable model. It systematically uncovers a decoupling between model expressivity and trainability, a previously underexplored phenomenon. Empirical evaluation demonstrates that the hybrid architecture significantly outperforms mainstream deep learning models (e.g., CNNs and ResNets) on topological phase classification tasks, achieving over 15% higher accuracy while exhibiting strong generalization across lattice geometries and system sizes. Moreover, the model retains physical interpretability through its explicit topological encoding, establishing a new paradigm for physics-informed, interpretable machine learning.
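To make the target quantity concrete, the sketch below computes the Chern number of a two-band lattice model. This is not the paper's real-space tensor-network construction; it is the standard Fukui–Hatsuda–Suzuki link-variable method in momentum space, applied to the Qi–Wu–Zhang model as an illustrative assumption, showing the kind of integer invariant a classifier must effectively learn.

```python
import numpy as np

def qwz_hamiltonian(kx, ky, m):
    # Qi-Wu-Zhang two-band model (illustrative choice, not from the paper):
    # H(k) = sin(kx) sigma_x + sin(ky) sigma_y + (m + cos kx + cos ky) sigma_z
    hx, hy = np.sin(kx), np.sin(ky)
    hz = m + np.cos(kx) + np.cos(ky)
    return np.array([[hz, hx - 1j * hy],
                     [hx + 1j * hy, -hz]])

def chern_number(m, N=24):
    """Fukui-Hatsuda-Suzuki lattice Chern number of the lower band."""
    ks = 2 * np.pi * np.arange(N) / N
    # Lower-band eigenvector at each point of the discretized Brillouin zone.
    u = np.empty((N, N, 2), dtype=complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            _, v = np.linalg.eigh(qwz_hamiltonian(kx, ky, m))
            u[i, j] = v[:, 0]  # eigh sorts ascending: column 0 is the lower band
    # Accumulate the Berry flux through each plaquette from the four link variables.
    flux = 0.0
    for i in range(N):
        for j in range(N):
            ip, jp = (i + 1) % N, (j + 1) % N
            link = (np.vdot(u[i, j], u[ip, j])
                    * np.vdot(u[ip, j], u[ip, jp])
                    * np.vdot(u[ip, jp], u[i, jp])
                    * np.vdot(u[i, jp], u[i, j]))
            flux += np.angle(link)
    return round(flux / (2 * np.pi))
```

A phase classifier trained on raw wavefunction data must approximate this highly nonlocal, quantized map; the hybrid architecture discussed above instead builds such an invariant into the model itself.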
📝 Abstract
Much attention has been devoted to the use of machine learning to approximate physical concepts. Yet, due to the limited interpretability of machine learning techniques, the question of what physics machine learning models are actually able to learn remains open. Here we bridge the concept of a physical quantity and its machine learning approximation in the context of the original application of neural networks in physics: topological phase classification. We construct a hybrid tensor-neural network that exactly expresses a real-space topological invariant and rigorously assess its trainability and generalization. Specifically, we benchmark the accuracy and trainability of the tensor-neural network against multiple types of neural networks, thus exemplifying the differences in trainability and representational power. Our work highlights the challenges in learning topological invariants and constitutes a stepping stone towards more accurate and more generalizable machine learning representations in condensed matter physics.