🤖 AI Summary
This work addresses the open problem of determining mutation equivalence for affine-type $\tilde{D}_n$ quivers. We propose an end-to-end learning framework based on graph neural networks (GNNs), augmented with interpretability analysis—including feature attribution and latent-space structural decomposition—to uncover and formalize necessary and sufficient criteria for mutation equivalence in $\tilde{D}_n$ quivers. Crucially, the model spontaneously reconstructs the classical $D_n$-type invariants in its latent space, empirically validating its capacity to induce cluster-algebraic structure. The learned equivalence criterion is both mathematically interpretable—admitting rigorous algebraic characterization—and generalizable across varying $n$. To our knowledge, this constitutes the first data-driven, AI-based solution to mutation classification in abstract algebra, bridging deep learning with structural representation theory in cluster algebras.
📝 Abstract
Machine learning is becoming an increasingly valuable tool in mathematics, enabling researchers to identify subtle patterns across collections of examples far too vast for any single researcher to review and analyze. In this work, we use graph neural networks to investigate quiver mutation -- an operation that transforms one quiver (or directed multigraph) into another -- which is central to the theory of cluster algebras, with deep connections to geometry, topology, and physics. In the study of cluster algebras, the question of mutation equivalence is of fundamental concern: given two quivers, can one efficiently determine whether one can be transformed into the other through a sequence of mutations? Currently, this question has been resolved only in specific cases. In this paper, we use graph neural networks and AI explainability techniques to discover mutation equivalence criteria for the previously unknown case of quivers of type $\tilde{D}_n$. Along the way, we also show that even without explicit training to do so, our model captures structure within its hidden representations that allows us to reconstruct known criteria from type $D_n$, adding to the growing evidence that modern machine learning models are capable of learning abstract and general rules from mathematical data.
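For readers unfamiliar with the operation at the heart of this paper, quiver mutation can be phrased concretely in terms of the quiver's skew-symmetric exchange matrix $B$, where $b_{ij} > 0$ records $b_{ij}$ arrows from vertex $i$ to vertex $j$. Below is a minimal sketch of the standard Fomin–Zelevinsky matrix mutation rule; the function name and setup are illustrative and not taken from the paper's code.

```python
import numpy as np

def mutate(B, k):
    """Mutate the skew-symmetric exchange matrix B at vertex k.

    Standard matrix mutation rule:
      b'_ij = -b_ij                                   if i == k or j == k
      b'_ij = b_ij + (|b_ik| b_kj + b_ik |b_kj|) / 2  otherwise
    Mutation is an involution: mutating twice at k returns the original matrix.
    """
    B = np.asarray(B, dtype=int)
    n = B.shape[0]
    Bp = B.copy()
    for i in range(n):
        for j in range(n):
            if i == k or j == k:
                Bp[i, j] = -B[i, j]
            else:
                Bp[i, j] = B[i, j] + (abs(B[i, k]) * B[k, j]
                                      + B[i, k] * abs(B[k, j])) // 2
    return Bp

# Example: the linearly oriented A_3 quiver 0 -> 1 -> 2.
B = np.array([[0, 1, 0],
              [-1, 0, 1],
              [0, -1, 0]])
# Mutating at the middle vertex yields the oriented 3-cycle 1 -> 0 -> 2 -> 1.
```

Two quivers are mutation equivalent when some finite sequence of such mutations (at possibly different vertices) carries one to the other; deciding this efficiently is the classification question the paper addresses.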