🤖 AI Summary
This work proposes NNEinFact, a framework for non-negative tensor decomposition based on Einstein summation (einsum). Addressing the limited generality and usability of existing non-negative tensor factorization tools, NNEinFact fits any non-negative decomposition structure expressible as a tensor contraction, specified via a user-defined einsum expression. The framework supports a broad class of (α,β)-divergence losses and handles missing data seamlessly. Leveraging an efficient multiplicative update algorithm, NNEinFact substantially outperforms standard models on real-world datasets: it achieves over 37% improvement in heldout prediction tasks, attains less than half the test loss of gradient-based methods, and converges up to 90× faster.
📝 Abstract
Despite the ubiquity of multiway data across scientific domains, there are few user-friendly tools that fit tailored nonnegative tensor factorizations. Researchers may use gradient-based automatic differentiation (which often struggles in nonnegative settings), choose between a limited set of methods with mature implementations, or implement their own model from scratch. As an alternative, we introduce NNEinFact, an einsum-based multiplicative update algorithm that fits any nonnegative tensor factorization expressible as a tensor contraction by minimizing one of many user-specified loss functions (including the $(\alpha,\beta)$-divergence). To use NNEinFact, the researcher simply specifies their model with a string. NNEinFact converges to a stationary point of the loss, supports missing data, and fits to tensors with hundreds of millions of entries in seconds. Empirically, NNEinFact fits custom models which outperform standard ones in heldout prediction tasks on real-world tensor data by over $37\%$ and attains less than half the test loss of gradient-based methods while converging up to 90 times faster.
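To make the core idea concrete, here is a minimal, hypothetical sketch (not NNEinFact's actual API) of what an einsum-specified non-negative factorization looks like: a rank-R non-negative matrix factorization `X ≈ W @ H` written as the contraction `"ir,rj->ij"`, fit with Lee–Seung multiplicative updates for the Frobenius loss. Multiplicative updates preserve non-negativity automatically because each factor is rescaled by a ratio of non-negative quantities; all variable names and the loop structure below are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch only: fit X ≈ W @ H, with the model written as the
# einsum contraction "ir,rj->ij", using Lee–Seung multiplicative updates
# for the Frobenius loss. This is not the NNEinFact API.

rng = np.random.default_rng(0)
I, J, R = 40, 30, 5

# Synthetic non-negative data with exact rank-R structure.
X = rng.random((I, R)) @ rng.random((R, J))

# Non-negative random initialization (small offset avoids exact zeros).
W = rng.random((I, R)) + 1e-3
H = rng.random((R, J)) + 1e-3
eps = 1e-12  # guards against division by zero

for _ in range(200):
    Xhat = np.einsum("ir,rj->ij", W, H)  # model prediction
    # W update: W *= (X @ H.T) / (Xhat @ H.T), expressed as einsum
    # contractions over the indices not belonging to W.
    W *= np.einsum("ij,rj->ir", X, H) / (np.einsum("ij,rj->ir", Xhat, H) + eps)
    Xhat = np.einsum("ir,rj->ij", W, H)
    # H update: H *= (W.T @ X) / (W.T @ Xhat).
    H *= np.einsum("ij,ir->rj", X, W) / (np.einsum("ij,ir->rj", Xhat, W) + eps)

rel_err = np.linalg.norm(X - np.einsum("ir,rj->ij", W, H)) / np.linalg.norm(X)
print(f"relative reconstruction error: {rel_err:.4f}")
```

Because every step is phrased as an einsum contraction, swapping in a different decomposition (e.g. a three-way CP model `"ir,jr,kr->ijk"`) only changes the contraction strings, which is the generality the abstract describes.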