🤖 AI Summary
Explicitly constructing combinatorial bijections remains a long-standing challenge in algebraic combinatorics, especially given the intractability of manually analyzing massive combinatorial datasets.
Method: We propose the first machine learning–driven, interpretable discovery framework for combinatorial bijections. Leveraging the attention mechanism of Transformer models, our approach analyzes paired combinatorial structures—such as Dyck paths—to uncover latent bijection patterns. We then introduce the Scaffolding Map algorithm, which systematically translates opaque attention patterns into verifiable, generalizable combinatorial mapping rules.
Contribution/Results: Our framework automatically derives a novel explicit construction of the zeta map directly from model attention—marking the first data-driven, mathematically rigorous derivation of this fundamental bijection. It overcomes the traditional reliance on human insight while preserving formal correctness, significantly enhancing both the efficiency and interpretability of discovering complex combinatorial bijections.
📝 Abstract
There is a large class of problems in algebraic combinatorics which can be distilled into the same challenge: construct an explicit combinatorial bijection. Traditionally, researchers have solved challenges like these by visually inspecting the data for patterns, formulating conjectures, and then proving them. But what is to be done if patterns fail to emerge until the data grows beyond human scale? In this paper, we propose a new workflow for discovering combinatorial bijections via machine learning. As a proof of concept, we train a transformer on paired Dyck paths and use its learned attention patterns to derive a new algorithmic description of the zeta map, which we call the *Scaffolding Map*.
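For readers unfamiliar with the zeta map, the following is a minimal sketch of its classical description via area vectors (this is the standard construction, not the Scaffolding Map introduced in the paper; function names and the `'N'`/`'E'` step encoding are illustrative):

```python
def area_vector(path):
    """Area vector of a Dyck path, given as a string of 'N' (up) and 'E' (right) steps:
    entry i counts the full cells between the path and the diagonal in row i."""
    a, east = [], 0
    for step in path:
        if step == 'N':
            a.append(len(a) - east)  # 0-based row index minus east steps taken so far
        else:
            east += 1
    return a

def zeta(path):
    """Classical sweep description of the zeta map: for b = 0, 1, 2, ..., scan the
    area vector left to right, writing 'N' for each entry equal to b and 'E' for
    each entry equal to b - 1."""
    a = area_vector(path)
    out = []
    for b in range(max(a, default=0) + 2):
        for entry in a:
            if entry == b:
                out.append('N')
            elif entry == b - 1:
                out.append('E')
    return ''.join(out)

# The zeta map sends the statistic pair (dinv, area) to (area, bounce);
# e.g. zeta('NNEE') = 'NENE' and zeta('NENE') = 'NNEE'.
```

Any explicit description of the map, such as the Scaffolding Map derived from attention patterns, must agree with this construction on all Dyck paths.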