🤖 AI Summary
This work establishes the optimal convergence rate for in-context learning of nonparametric regression over α-Hölder smooth function classes. The authors propose a Transformer-based in-context learning method that combines a kernel-weighted polynomial basis with a gradient descent mechanism to approximate local polynomial estimators. Their approach achieves the minimax-optimal mean squared error rate of O(n⁻²ᵅ⁄⁽²ᵅ⁺ᵈ⁾), matching the theoretical lower bound, while requiring substantially fewer model parameters and pretraining sequences than prior constructions. This demonstrates a favorable balance between computational efficiency and statistical optimality.
📝 Abstract
We study in-context learning for nonparametric regression with $\alpha$-H\"older smooth regression functions, for some $\alpha>0$. We prove that, with $n$ in-context examples and $d$-dimensional regression covariates, a pretrained transformer with $\Theta(\log n)$ parameters and $\Omega\bigl(n^{2\alpha/(2\alpha+d)}\log^3 n\bigr)$ pretraining sequences can achieve the minimax-optimal rate of convergence $O\bigl(n^{-2\alpha/(2\alpha+d)}\bigr)$ in mean squared error. Our result requires substantially fewer transformer parameters and pretraining sequences than previous results in the literature. This is achieved by showing that transformers are able to approximate local polynomial estimators efficiently by implementing a kernel-weighted polynomial basis and then running gradient descent.
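The estimator the transformer is shown to emulate can be illustrated concretely. Below is a minimal NumPy sketch of a local polynomial estimator (degree one, i.e. local linear, for simplicity) fitted by plain gradient descent on a kernel-weighted squared loss, mirroring the "kernel-weighted polynomial basis + gradient descent" mechanism described above. The function name, the Gaussian kernel choice, and all hyperparameters are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def local_poly_predict(X, y, x0, bandwidth=0.2, lr=0.5, steps=2000):
    """Estimate f(x0) by local linear regression, solved with gradient
    descent on the kernel-weighted squared loss (illustrative sketch).

    X: (n, d) covariates, y: (n,) responses, x0: (d,) query point.
    """
    # Kernel weights: Gaussian kernel on the distance to the query point.
    w = np.exp(-np.sum((X - x0) ** 2, axis=1) / (2 * bandwidth ** 2))
    # Polynomial basis centered at x0: [1, (x - x0)] for degree 1.
    # Higher Hölder smoothness alpha would call for a higher-degree basis.
    B = np.hstack([np.ones((len(X), 1)), X - x0])
    theta = np.zeros(B.shape[1])
    for _ in range(steps):
        resid = B @ theta - y
        # Gradient of the weighted squared loss, normalized by total weight.
        grad = B.T @ (w * resid) / w.sum()
        theta -= lr * grad
    # The fitted intercept is the local polynomial estimate of f(x0).
    return theta[0]
```

For example, with noisy samples of a smooth function such as sin(2x) on [-1, 1], the returned intercept approximates the function value at the query point; the bandwidth plays the role of the localization scale that drives the n^{-2α/(2α+d)} bias-variance trade-off.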