🤖 AI Summary
This paper addresses the classical problem in combinatorial representation theory of deciding when Kronecker coefficients of the symmetric group vanish, and applies interpretable machine learning to this task for the first time. The approach features novel feature engineering based on *b*-loadings and a principal component embedding, enabling interpretable models such as logistic regression and random forests, alongside a Transformer architecture tailored to Kronecker coefficient prediction. The key contributions are threefold: (1) the first application of interpretable ML to predicting the vanishing of Kronecker coefficients; (2) an explicit, mathematically interpretable decision function expressed in terms of *b*-loadings; and (3) a Transformer model achieving over 99% accuracy, while the interpretable models attain an accuracy of approximately 83%. This work bridges machine learning and representation theory, suggesting a new paradigm for the learnability of combinatorial invariants.
📝 Abstract
We analyze the saliency of neural networks and employ interpretable machine learning models to predict whether the Kronecker coefficients of the symmetric group vanish. Our models take triples of partitions as input features, together with *b*-loadings derived from the first principal component of an embedding that captures the differences between partitions. Across these approaches we achieve an accuracy of approximately 83% and derive explicit formulas for a decision function in terms of *b*-loadings. We also develop transformer-based models for prediction, achieving the highest reported accuracy of over 99%.
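The abstract's pipeline can be sketched end to end: encode a triple of partitions as a fixed-length vector, extract loadings from the first principal component of the feature matrix, and fit a logistic-regression classifier on the augmented features. This is a minimal illustrative sketch, not the paper's implementation: the `pad` and `features` helpers, the use of pairwise partition differences as the embedding, and the treatment of the principal-component loadings as a stand-in for the paper's *b*-loadings are all assumptions. The toy dataset covers the Kronecker coefficients of S_3, with vanishing labels computed from the S_3 character table.

```python
import numpy as np

def pad(p, k=3):
    """Pad a partition (weakly decreasing tuple) with zeros to length k."""
    return np.array(list(p) + [0] * (k - len(p)), dtype=float)

def features(triple):
    """Hypothetical embedding: padded parts plus their pairwise differences
    (a stand-in for the paper's difference-based embedding)."""
    a, b, c = (pad(p) for p in triple)
    return np.concatenate([a, b, c, a - b, b - c, a - c])

# All partition triples of 3 (up to symmetry of the Kronecker coefficient);
# label 1 iff g(lambda, mu, nu) != 0, computed from the S_3 character table.
TRIPLES = [
    ((3,), (3,), (3,)),                 # g = 1
    ((3,), (3,), (2, 1)),               # g = 0
    ((3,), (3,), (1, 1, 1)),            # g = 0
    ((3,), (2, 1), (2, 1)),             # g = 1
    ((3,), (2, 1), (1, 1, 1)),          # g = 0
    ((3,), (1, 1, 1), (1, 1, 1)),       # g = 1
    ((2, 1), (2, 1), (2, 1)),           # g = 1
    ((2, 1), (2, 1), (1, 1, 1)),        # g = 1
    ((2, 1), (1, 1, 1), (1, 1, 1)),     # g = 0
    ((1, 1, 1), (1, 1, 1), (1, 1, 1)),  # g = 0
]
LABELS = np.array([1, 0, 0, 1, 0, 1, 1, 1, 0, 0], dtype=float)

X = np.stack([features(t) for t in TRIPLES])

# First principal component of the centered feature matrix; its entries
# play the role of loadings here (the paper's b-loadings differ in detail).
Xc = X - X.mean(axis=0)
_, _, vt = np.linalg.svd(Xc, full_matrices=False)
loadings = vt[0]          # unit-norm loading vector
pc_score = Xc @ loadings  # one scalar score per triple

# Augment the raw features with the PC score, standardize, and fit
# logistic regression by plain gradient descent (no external libraries).
Xf = np.column_stack([X, pc_score])
Xf = (Xf - Xf.mean(axis=0)) / (Xf.std(axis=0) + 1e-9)

w, bias = np.zeros(Xf.shape[1]), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(Xf @ w + bias)))
    grad = p - LABELS
    w -= 0.3 * (Xf.T @ grad) / len(LABELS)
    bias -= 0.3 * grad.mean()

pred = (1.0 / (1.0 + np.exp(-(Xf @ w + bias))) > 0.5).astype(int)
acc = (pred == LABELS).mean()
print(f"training accuracy: {acc:.2f}")
```

At this toy scale the classifier is only fit and evaluated on its own training data; the weight attached to `pc_score` is the analogue of the explicit decision function in terms of loadings that the abstract describes.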