🤖 AI Summary
This work investigates the statistical convergence behavior of attention mechanisms in large language models under ultra-long contexts, i.e., in the limit of infinite sequence length. To overcome the limitations of conventional analyses in characterizing this limit, the authors introduce *token-sample complexity*, a metric quantifying how fast the attention output on *n* tokens converges to its infinite-length limit. Theoretically, the work establishes, for the first time, a dual-rate framework linking uniform convergence of the attention map to moment convergence of the transformed token distribution, revealing that convergence speed is jointly governed by the geometric structure of attention and the spectral properties of the tokens, and identifying a distinctive logarithmic convergence phenomenon in the hardmax limit. Using probabilistic inequalities, sub-Gaussian theory, and asymptotic analysis of softmax/hardmax, the analysis derives tight bounds of order *O*(1/√*n*) and *O*(1/*n*^β) with β < 1/2, making the exponential and polynomial dependence of the constants explicit. Empirical validation on BERT and synthetic Gaussian data confirms the predicted convergence orders and scaling laws.
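The hardmax limit mentioned above can be illustrated with a minimal sketch (not the paper's construction, and it does not reproduce the logarithmic-in-*n* rate): for a fixed finite set of tokens, as the attention (inverse-temperature) parameter β grows, the softmax-weighted average collapses onto the single best-scoring token, i.e., the hardmax output. All names and parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 50, 3
tokens = rng.normal(size=(n, d))   # assumed i.i.d. Gaussian tokens
x = rng.normal(size=d)             # a fixed query direction

scores = tokens @ x
hard = tokens[np.argmax(scores)]   # hardmax output: the top-scoring token

errs = {}
for beta in (1.0, 10.0, 100.0):
    logits = beta * scores
    w = np.exp(logits - logits.max())   # shift for numerical stability
    w /= w.sum()
    out = w @ tokens                    # softmax attention output
    errs[beta] = float(np.linalg.norm(out - hard))
    print(f"beta={beta:6.1f}  ||softmax - hardmax|| = {errs[beta]:.3e}")
```

The distance to the hardmax output decays roughly like exp(-β · gap), where the gap is the difference between the two largest scores, so the collapse is very fast once β exceeds the inverse gap.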
📝 Abstract
As context windows in large language models continue to expand, it is essential to characterize how attention behaves at extreme sequence lengths. We introduce token-sample complexity: the rate at which attention computed on $n$ tokens converges to its infinite-token limit. We estimate finite-$n$ convergence bounds at two levels: pointwise uniform convergence of the attention map, and convergence of moments for the transformed token distribution. For compactly supported (and more generally sub-Gaussian) distributions, our first result shows that the attention map converges uniformly on a ball of radius $R$ at rate $C(R)/\sqrt{n}$, where $C(R)$ grows exponentially with $R$. For large $R$, this estimate loses practical value, and our second result addresses this issue by establishing convergence rates for the moments of the transformed distribution (the token output of the attention layer). In this case, the rate is $C'(R)/n^\beta$ with $\beta < \frac{1}{2}$, and $C'(R)$ depends polynomially on the size of the support of the distribution. The exponent $\beta$ depends on the attention geometry and the spectral properties of the token distribution. We also examine the regime in which the attention parameter tends to infinity and the softmax approaches a hardmax, and in this setting, we establish a logarithmic rate of convergence. Experiments on synthetic Gaussian data and real BERT models on Wikipedia text confirm our predictions.
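The $C(R)/\sqrt{n}$ behavior described in the abstract can be checked empirically with a minimal Monte Carlo sketch under an assumed toy model (single query, i.i.d. standard Gaussian tokens, dot-product softmax attention); this is an illustration, not the paper's experimental setup. A convenient feature of the Gaussian case is that the infinite-token limit has a closed form: by the Gaussian moment-generating-function identity, $\mathbb{E}[e^{\beta\langle x,T\rangle}T]/\mathbb{E}[e^{\beta\langle x,T\rangle}] = \beta x$ for $T \sim \mathcal{N}(0, I_d)$.

```python
import numpy as np

rng = np.random.default_rng(0)
d, beta = 4, 0.5
x = rng.normal(size=d) / np.sqrt(d)   # a fixed query point

def attention_output(tokens, x, beta):
    # Softmax-weighted average of tokens, weights ∝ exp(beta * <x, t_i>)
    logits = beta * (tokens @ x)
    logits -= logits.max()            # shift for numerical stability
    w = np.exp(logits)
    w /= w.sum()
    return w @ tokens

# For t_i ~ N(0, I_d) the infinite-token limit is beta * x (Gaussian MGF identity).
limit = beta * x

errors = {}
for n in (100, 1000, 10000):
    trials = [np.linalg.norm(attention_output(rng.normal(size=(n, d)), x, beta) - limit)
              for _ in range(200)]
    errors[n] = float(np.mean(trials))
    print(f"n={n:6d}  mean error = {errors[n]:.4f}")
```

Each 100-fold increase in $n$ should shrink the mean error by roughly a factor of 10, consistent with a $1/\sqrt{n}$ rate; the sub-Gaussian assumption holds here since the tokens are Gaussian.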