🤖 AI Summary
This work investigates how large language models (LLMs) implicitly model the structure of the data-generating process during in-context learning (ICL). To explain ICL's ability to generalize without any parameter updates, the authors propose a "double convergence" theoretical framework: they prove that, during ICL, implicit representations simultaneously converge to low-frequency smooth structure in the frequency domain and geometrically coherent structure in the spatial domain, which explains the coexistence of local distortions and global order. Through representation-convergence analysis and a model of frequency-domain energy decay, they theoretically derive and empirically validate ICL's intrinsic robustness to high-frequency noise, revealing a distinctive "energy decays without dispersing" property of the representations. The study provides the first systematic characterization of ICL's implicit inductive bias, establishing a new paradigm for understanding its generalization mechanism.
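As a concrete illustration of the frequency-domain claim, the sketch below measures how much of a representation's spectral energy sits in low frequencies along the context axis. The hidden states here are random stand-ins; in practice they would be extracted per layer from an LLM processing an ICL prompt. The function name `low_frequency_energy_ratio` and the 0.1 cutoff are hypothetical choices for illustration, not the paper's method.

```python
import numpy as np

def low_frequency_energy_ratio(hidden_states, cutoff=0.1):
    """Fraction of spectral energy below `cutoff` (as a fraction of the
    Nyquist band), computed along the context axis and summed over
    feature dimensions.

    hidden_states: array of shape (context_length, hidden_dim).
    """
    spectrum = np.fft.rfft(hidden_states, axis=0)  # per-dim spectrum over context
    energy = np.abs(spectrum) ** 2                 # spectral energy per frequency bin
    k = max(1, int(cutoff * energy.shape[0]))      # edge of the low-frequency band
    return float(energy[:k].sum() / energy.sum())

# Stand-in data: in practice, layer_reps[l] would hold the hidden states
# of layer l while the model processes an ICL prompt.
rng = np.random.default_rng(0)
layer_reps = [rng.normal(size=(256, 64)) for _ in range(4)]
for layer, h in enumerate(layer_reps):
    print(f"layer {layer}: low-freq energy ratio = {low_frequency_energy_ratio(h):.3f}")
```

Under the paper's claim, this ratio should grow with context length and depth as representations converge to smooth structure; on the white-noise stand-ins above it simply reflects the width of the band.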
📝 Abstract
In-context learning (ICL) enables large language models (LLMs) to acquire new behaviors from the input sequence alone, without any parameter updates. Recent studies have shown that ICL can go beyond the meanings learned during pretraining by internalizing the structure of the prompt's data-generating process (DGP) into the hidden representations. However, the mechanism by which LLMs achieve this remains open. In this paper, we present the first rigorous explanation of this phenomenon by introducing a unified framework of double convergence, in which hidden representations converge both over context and across layers. This double convergence induces an implicit bias towards smooth (low-frequency) representations, which we prove analytically and verify empirically. Our theory explains several open empirical observations, including why learned representations exhibit globally structured but locally distorted geometry, and why their total energy decays without vanishing. Moreover, our theory predicts that ICL is intrinsically robust to high-frequency noise, which we confirm empirically. These results provide new insight into the underlying mechanisms of ICL, and a theoretical foundation for studying it that we hope extends to more general data distributions and settings.
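The predicted robustness to high-frequency noise can be pictured with a small numerical experiment: if the representation a downstream reader relies on is band-limited to low frequencies, then a perturbation whose spectrum lives above that band barely changes the readout. The sketch below is a minimal illustration under that assumption; `bandpass_noise`, `low_pass_readout`, and all cutoffs are hypothetical constructions, not taken from the paper.

```python
import numpy as np

def bandpass_noise(n_tokens, dim, low_cut, rng):
    """White noise filtered to keep only frequencies above `low_cut`
    (as a fraction of the Nyquist band) along the context axis."""
    noise = rng.normal(size=(n_tokens, dim))
    spec = np.fft.rfft(noise, axis=0)
    spec[: int(low_cut * spec.shape[0])] = 0.0  # remove low-frequency content
    return np.fft.irfft(spec, n=n_tokens, axis=0)

def low_pass_readout(h, cutoff=0.1):
    """Project representations onto their low-frequency components,
    mimicking a reader that only uses the smooth structure."""
    spec = np.fft.rfft(h, axis=0)
    spec[max(1, int(cutoff * spec.shape[0])):] = 0.0  # keep only the low band
    return np.fft.irfft(spec, n=h.shape[0], axis=0)

rng = np.random.default_rng(0)
# Smooth stand-in for an ICL representation, then a high-frequency perturbation.
h = low_pass_readout(rng.normal(size=(256, 64)))
noisy = h + 0.5 * bandpass_noise(256, 64, low_cut=0.5, rng=rng)

clean_out, noisy_out = low_pass_readout(h), low_pass_readout(noisy)
print("relative change:", np.linalg.norm(noisy_out - clean_out) / np.linalg.norm(clean_out))
```

Because the noise's spectrum and the readout's pass-band do not overlap, the printed relative change is near zero; this is only a cartoon of the mechanism the paper analyzes, where the low-frequency bias emerges from the double convergence rather than being imposed by an explicit filter.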