🤖 AI Summary
This work addresses the curse of dimensionality faced by Transformers in high-dimensional function approximation. From the perspective of approximation theory, it provides the first rigorous proof that Transformers can overcome this curse for Hölder-continuous functions with exponent β. Methodologically, it builds a context-free theoretical framework based on the Kolmogorov–Arnold representation theorem and designs a minimal architecture comprising only a single-head softmax self-attention layer and several feedforward layers. Key contributions include: (1) the feedforward layer width reduced to a constant, depending on the choice of activation (e.g., floor or ReLU); (2) total depth improved to O(log(1/ε)), significantly better than prior results; and (3) the width upper bound tightened to O(ε⁻²⁄ᵝ log(1/ε)) for approximation accuracy ε. This is the first work to establish such approximation rates for Transformers without contextual-mapping assumptions, rigorously characterizing their expressive power.
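The construction rests on the Kolmogorov–Arnold representation theorem. In its classical form (stated here for orientation, not as the paper's exact formulation), it says that every continuous multivariate function on the unit cube decomposes into sums and compositions of continuous univariate functions:

```latex
% Kolmogorov–Arnold representation: for every continuous
% f : [0,1]^d \to \mathbb{R} there exist continuous univariate
% functions \Phi_q and \phi_{q,p} such that
f(x_1,\dots,x_d) \;=\; \sum_{q=0}^{2d} \Phi_q\!\left( \sum_{p=1}^{d} \phi_{q,p}(x_p) \right).
```

This reduces high-dimensional approximation to approximating univariate functions, which is the intuition behind avoiding the curse of dimensionality.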
📝 Abstract
The Transformer model is widely used in various application areas of machine learning, such as natural language processing. This paper investigates the approximation of the H\"older continuous function class $\mathcal{H}_{Q}^{\beta}\left([0,1]^{d\times n},\mathbb{R}^{d\times n}\right)$ by Transformers and constructs several Transformers that can overcome the curse of dimensionality. These Transformers consist of one self-attention layer with one head and the softmax function as the activation function, along with several feedforward layers. For example, to achieve an approximation accuracy of $\epsilon$, if the activation functions of the feedforward layers in the Transformer are ReLU and floor, only $\mathcal{O}\left(\log\frac{1}{\epsilon}\right)$ feedforward layers are needed, with the widths of these layers not exceeding $\mathcal{O}\left(\frac{1}{\epsilon^{2/\beta}}\log\frac{1}{\epsilon}\right)$. If other activation functions are allowed in the feedforward layers, the width of the feedforward layers can be further reduced to a constant. These results demonstrate that Transformers have strong expressive capability. The construction in this paper is based on the Kolmogorov–Arnold Representation Theorem and does not require the concept of contextual mapping; hence our proof is more intuitively clear than previous Transformer approximation works. Additionally, the translation technique proposed in this paper helps transfer previous approximation results for feedforward neural networks to the Transformer setting.