🤖 AI Summary
This work addresses the challenge of modeling alpha-equivalence, the semantic invariance of formal languages under renaming of bound variables, in neural representations. To support dynamic vocabulary expansion and generalization to novel symbols, we propose interchangeable token embeddings. Methodologically, we design a dual-part embedding scheme: a shared semantic component enforces alpha-equivalence, while a stochastic discriminative component preserves symbol distinguishability. We introduce alpha-covariance as a metric to quantify model robustness under renaming transformations. Our setup combines a Transformer encoder-decoder, alpha-renaming data augmentation, and a loss formulation that enforces the equivalence constraints. Experiments on solving linear temporal logic formulae and on a copy task with an extendable vocabulary demonstrate substantial improvements in out-of-distribution generalization. The results validate the method's strong inductive bias toward alpha-equivalence and its effectiveness in cross-symbol transfer, enabling robust reasoning over syntactically distinct but semantically equivalent expressions.
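The dual-part scheme described above can be illustrated with a minimal sketch. Here NumPy random vectors stand in for the model's parameters, and the dimensions (`EMB_DIM`, `SHARED_DIM`) and function names are illustrative assumptions, not the paper's implementation: every interchangeable token concatenates one vector shared by all such tokens with a freshly sampled per-token vector.

```python
import numpy as np

rng = np.random.default_rng(0)

EMB_DIM = 8      # total embedding dimension (illustrative)
SHARED_DIM = 4   # size of the shared "semantic" part

# Shared part: a single vector used by ALL interchangeable tokens.
# Random here for illustration; in the model it would be learned.
shared_part = rng.normal(size=SHARED_DIM)

def embed_interchangeable_token() -> np.ndarray:
    """Dual-part embedding: shared semantic part + freshly
    sampled discriminative part."""
    random_part = rng.normal(size=EMB_DIM - SHARED_DIM)
    return np.concatenate([shared_part, random_part])

# Two distinct proposition symbols, e.g. p and q in an LTL formula:
e_p = embed_interchangeable_token()
e_q = embed_interchangeable_token()

# They agree on the shared part (same core concept)...
assert np.allclose(e_p[:SHARED_DIM], e_q[:SHARED_DIM])
# ...but differ on the random part, so they remain distinguishable.
assert not np.allclose(e_p[SHARED_DIM:], e_q[SHARED_DIM:])
```

Resampling the random part each time a symbol is embedded is what prevents the model from attaching meaning to any particular symbol identity.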
📝 Abstract
We propose a novel approach for learning interchangeable tokens in language models to obtain an extendable vocabulary that can generalize to new tokens. Our method addresses alpha-equivalence, the principle that renaming bound variables preserves semantics. This property arises in many formal languages such as temporal logics, where all proposition symbols represent the same concept but remain distinct. To handle such tokens, we develop a dual-part embedding approach. The first part is shared across all interchangeable tokens, enforcing that they represent the same core concept. The second part is randomly generated for each token, enabling distinguishability. As a baseline, we consider a simpler approach that uses alpha-renaming for data augmentation. We also present alpha-covariance, a metric for measuring robustness against alpha-conversions. When evaluated in a Transformer encoder-decoder model on solving linear temporal logic formulae and on a copy task with an extendable vocabulary, our method demonstrates promising generalization capabilities as well as a favorable inductive bias for alpha-equivalence.
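The alpha-renaming data-augmentation baseline mentioned above can be sketched as follows. This is an illustrative toy, not the paper's code: it applies a random permutation of proposition symbols to an LTL formula given as a string, which by alpha-equivalence leaves the formula's semantics unchanged. The function name and the whitespace-friendly regex substitution are assumptions; it also assumes no symbol is a substring of another.

```python
import random
import re

def alpha_rename(formula: str, symbols: list, seed=None) -> str:
    """Randomly permute proposition symbols in a formula.

    By alpha-equivalence, the result has the same semantics
    as the input; it can be used as a training augmentation.
    """
    rng = random.Random(seed)
    permuted = symbols[:]
    rng.shuffle(permuted)
    mapping = dict(zip(symbols, permuted))
    # Simultaneous substitution via a single regex pass, so that
    # p -> q and q -> p do not interfere with each other.
    pattern = re.compile("|".join(re.escape(s) for s in symbols))
    return pattern.sub(lambda m: mapping[m.group(0)], formula)

# Example: "G (p -> F q)" becomes either itself or "G (q -> F p)".
print(alpha_rename("G (p -> F q)", ["p", "q"], seed=1))
```

Because the substitution is performed in one pass over the whole formula, the mapping is applied simultaneously rather than sequentially, which is essential when the permutation swaps two symbols.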