(How) Can Transformers Predict Pseudo-Random Numbers?

📅 2025-02-14
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work investigates whether Transformers can learn pseudo-random number sequences generated by Linear Congruential Generators (LCGs), in particular performing in-context prediction with unseen moduli $m$ and parameters $(a, c)$. Methodologically, the study reveals that Transformers implicitly construct algorithmic structures: with a fixed modulus, they learn digit-wise number representations and modular factorization, visible in embedding decompositions and attention patterns; with unknown moduli, they adopt a two-stage inference, first estimating $m$ from the context and then leveraging its prime factorization for prediction. Experiments achieve high-accuracy prediction for $m = 2^{32}$ (fixed modulus) and $m_{\text{test}} = 2^{16}$ (unseen moduli); a sharp performance transition occurs at depth 3; and the context length needed for high accuracy scales sublinearly with $m$. This is the first demonstration that Transformers can implicitly acquire and execute number-theoretic algorithms, significantly extending their capacity to model deterministic pseudo-random sequences.
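The digit-wise representation described above can be made concrete: an integer modulo $m = b^k$ decomposes into $k$ base-$b$ digits, on which modular arithmetic acts digit-by-digit with carries. A minimal sketch of such a decomposition (the base and digit count here are illustrative, not the paper's exact setup):

```python
def to_digits(x: int, base: int, n_digits: int) -> list[int]:
    """Decompose x into n_digits base-`base` digits, least significant first."""
    return [(x // base**i) % base for i in range(n_digits)]

def from_digits(digits: list[int], base: int) -> int:
    """Inverse of to_digits: reassemble the integer from its digits."""
    return sum(d * base**i for i, d in enumerate(digits))
```

For example, with `base=256` and four digits, any `x < 2**32` round-trips through `to_digits` and `from_digits`.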

๐Ÿ“ Abstract
Transformers excel at discovering patterns in sequential data, yet their fundamental limitations and learning mechanisms remain crucial topics of investigation. In this paper, we study the ability of Transformers to learn pseudo-random number sequences from linear congruential generators (LCGs), defined by the recurrence relation $x_{t+1} = a x_t + c \;\mathrm{mod}\; m$. Our analysis reveals that with sufficient architectural capacity and training data variety, Transformers can perform in-context prediction of LCG sequences with unseen moduli ($m$) and parameters ($a, c$). Through analysis of embedding layers and attention patterns, we uncover how Transformers develop algorithmic structures to learn these sequences in two scenarios of increasing complexity. First, we analyze how Transformers learn LCG sequences with unseen ($a, c$) but fixed modulus, and we demonstrate successful learning up to $m = 2^{32}$. Our analysis reveals that models learn to factorize the modulus and utilize digit-wise number representations to make sequential predictions. In the second, more challenging scenario of unseen moduli, we show that Transformers can generalize to unseen moduli up to $m_{\text{test}} = 2^{16}$. In this case, the model employs a two-step strategy: first estimating the unknown modulus from the context, then utilizing prime factorizations to generate predictions. For this task, we observe a sharp transition in the accuracy at a critical depth $= 3$. We also find that the number of in-context sequence elements needed to reach high accuracy scales sublinearly with the modulus.
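For reference, the LCG recurrence from the abstract takes only a few lines to implement (the parameter values in the usage note are illustrative, not the paper's training settings):

```python
def lcg_sequence(a: int, c: int, m: int, x0: int, n: int) -> list[int]:
    """Generate n terms of the LCG x_{t+1} = (a * x_t + c) mod m, starting from x0."""
    seq = [x0 % m]
    for _ in range(n - 1):
        seq.append((a * seq[-1] + c) % m)
    return seq
```

When $(a, c, m)$ satisfy the Hull–Dobell conditions (e.g. `a=5, c=3, m=16`), the sequence visits all $m$ residues before repeating, which is the full-period regime typically used for such generators.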
Problem

Research questions and friction points this paper is trying to address.

Can Transformers predict pseudo-random numbers?
Can they learn LCG sequences with unseen moduli in context?
What algorithmic structures do they develop for sequence prediction?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transformers learn to predict LCG sequences in context, up to $m = 2^{32}$ for a fixed modulus.
Models factorize the modulus and use digit-wise number representations to make accurate predictions.
Transformers generalize to unseen moduli via a two-step estimate-then-predict strategy.
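The first step of that strategy, estimating $m$ from the context, has a classical number-theoretic counterpart (the paper's Transformer is not claimed to implement it literally): successive differences $d_t = x_{t+1} - x_t$ satisfy $d_{t+1} \equiv a\,d_t \pmod{m}$, so each $d_{t+2}\,d_t - d_{t+1}^2$ is a multiple of $m$, and a gcd over a few such values recovers $m$ with high probability. A sketch of that classical trick:

```python
from functools import reduce
from math import gcd

def estimate_modulus(seq: list[int]) -> int:
    """Estimate the modulus of an LCG from observed terms.

    Uses the classical difference trick: for d_t = x_{t+1} - x_t,
    m divides d_{t+2}*d_t - d_{t+1}^2, so the gcd of several such
    values is (with high probability) exactly m.
    """
    d = [b - a for a, b in zip(seq, seq[1:])]
    vals = [abs(d[i + 2] * d[i] - d[i + 1] ** 2) for i in range(len(d) - 2)]
    return reduce(gcd, vals)
```

A handful of context elements usually suffices; the gcd of the first few candidate values already collapses to $m$ unless the multipliers happen to share a common factor.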
🔎 Similar Papers
No similar papers found.
Tao Tao
University of Maryland
Darshil Doshi
DSAI postdoctoral Fellow @ Johns Hopkins University
Deep Learning and AI; Condensed Matter Theory
Dayal Singh Kalra
Department of Computer Science, University of Maryland, College Park, USA
Tianyu He
Microsoft Research
machine learning; generative models; world models
M. Barkeshli
Department of Physics, University of Maryland, College Park, USA; Joint Quantum Institute, University of Maryland, College Park, USA