AI Summary
This work investigates whether Transformers can generalize to learn pseudo-random number sequences generated by Linear Congruential Generators (LCGs), in particular performing in-context prediction with unseen moduli $m$ and parameters $(a,c)$. Methodologically, the study reveals that Transformers implicitly construct algorithmic structures: under a fixed $m$, they learn digit representations and modular factorization via embedding decomposition and attention; under unknown $m$, they adopt a two-stage inference: first estimating $m$ from context, then leveraging its prime factorization for prediction. Experiments achieve high-accuracy prediction for $m = 2^{32}$ (fixed modulus) and $m_{\text{test}} = 2^{16}$ (unseen modulus); a sharp performance transition occurs at depth 3, and the context length needed scales sublinearly with $m$. This is the first demonstration that Transformers can implicitly acquire and execute number-theoretic algorithms, significantly extending their capacity to model deterministic pseudo-random sequences.
Abstract
Transformers excel at discovering patterns in sequential data, yet their fundamental limitations and learning mechanisms remain crucial topics of investigation. In this paper, we study the ability of Transformers to learn pseudo-random number sequences from linear congruential generators (LCGs), defined by the recurrence relation $x_{t+1} = a x_t + c \;\mathrm{mod}\; m$. Our analysis reveals that with sufficient architectural capacity and training data variety, Transformers can perform in-context prediction of LCG sequences with unseen moduli ($m$) and parameters ($a,c$). Through analysis of embedding layers and attention patterns, we uncover how Transformers develop algorithmic structures to learn these sequences in two scenarios of increasing complexity. First, we analyze how Transformers learn LCG sequences with unseen ($a, c$) but fixed modulus, and we demonstrate successful learning up to $m = 2^{32}$. Our analysis reveals that models learn to factorize the modulus and utilize digit-wise number representations to make sequential predictions. In the second, more challenging scenario of unseen moduli, we show that Transformers can generalize to unseen moduli up to $m_{\text{test}} = 2^{16}$. In this case, the model employs a two-step strategy: first estimating the unknown modulus from the context, then utilizing prime factorizations to generate predictions. For this task, we observe a sharp transition in accuracy at a critical depth of 3. We also find that the number of in-context sequence elements needed to reach high accuracy scales sublinearly with the modulus.
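To make the setup concrete, the following is a minimal sketch of generating an LCG sequence from the recurrence $x_{t+1} = a x_t + c \;\mathrm{mod}\; m$, together with a fixed-width digit decomposition of the kind the paper reports the model learning internally. The specific parameter values, the choice of base, and the helper names here are illustrative assumptions, not the paper's actual training configuration or tokenization.

```python
def lcg_sequence(a, c, m, x0, length):
    """Generate `length` terms of the LCG recurrence x_{t+1} = (a*x_t + c) mod m."""
    seq = [x0 % m]
    for _ in range(length - 1):
        seq.append((a * seq[-1] + c) % m)
    return seq

def to_digits(x, base, n_digits):
    """Fixed-width base-`base` digit representation, most significant digit first.

    Illustrative stand-in for a digit-wise number representation; the paper's
    exact tokenization scheme is not specified here.
    """
    return [(x // base**i) % base for i in reversed(range(n_digits))]

# Example with small, arbitrary parameters (a=5, c=3, m=16, x0=1):
seq = lcg_sequence(5, 3, 16, 1, 5)   # -> [1, 8, 11, 10, 5]
digits = to_digits(seq[1], 2, 4)     # 8 in base 2, width 4 -> [1, 0, 0, 0]
```

A Transformer trained in the fixed-modulus setting would see many such sequences with varying $(a, c)$; in the unseen-modulus setting, $m$ itself also varies between sequences and must be inferred from context.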