Learning without training: The implicit dynamics of in-context learning

📅 2025-07-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) can learn new patterns from examples supplied in the prompt at inference time, yet the underlying mechanism remains poorly understood. This work investigates the implicit dynamics of in-context learning (ICL) through a combined theoretical and empirical approach. We show that Transformer blocks, via the interaction between self-attention and MLP layers, implicitly encode contextual inputs as low-rank updates to the MLP weights, without gradient-based optimization or explicit parameter modification. We formally model this process and empirically validate it across multiple architectures and tasks. Our analysis reveals that standard Transformer architectures inherently possess a form of dynamic weight modulation. This work provides a rigorous mathematical characterization of ICL, enhancing interpretability and advancing our understanding of "online learning" during LLM inference. Moreover, it establishes a theoretical foundation for designing efficient, lightweight adaptation methods that leverage this inherent architectural plasticity rather than external fine-tuning.

📝 Abstract
One of the most striking features of Large Language Models (LLMs) is their ability to learn in context. Namely, at inference time, an LLM is able to learn new patterns without any additional weight update when these patterns are presented in the form of examples in the prompt, even if these patterns were not seen during training. The mechanisms through which this can happen are still largely unknown. In this work, we show that the stacking of a self-attention layer with an MLP allows the transformer block to implicitly modify the weights of the MLP layer according to the context. We argue through theory and experimentation that this simple mechanism may be the reason why LLMs can learn in context and not only during training. Specifically, we show under mild simplifying assumptions how a transformer block implicitly transforms a context into a low-rank weight-update of the MLP layer.
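The core claim can be illustrated with a small numerical sketch. The example below is not taken from the paper; the shapes and the exact form of the update `dW` are assumptions based on the abstract's low-rank-update claim. The idea: if the context shifts the attention output feeding the MLP by some vector `delta`, that shift can be absorbed into a rank-1 update of the MLP's first weight matrix, so the block behaves as if the MLP weights had been modified by the context.

```python
import numpy as np

# Hypothetical sketch: a context-induced shift in the MLP's input is
# equivalent to a rank-1 update of the MLP's first-layer weights.
rng = np.random.default_rng(0)
d = 8
W = rng.normal(size=(d, d))    # MLP first-layer weights
a = rng.normal(size=d)         # attention output A(x) without context
delta = rng.normal(size=d)     # shift induced by the context: A(C, x) = a + delta

# Candidate rank-1 update that absorbs the shift:
#   dW = (W @ delta) a^T / ||a||^2, so that (W + dW) @ a == W @ (a + delta)
dW = np.outer(W @ delta, a) / (a @ a)

# The updated weights applied to the context-free input reproduce the
# original weights applied to the context-shifted input.
assert np.allclose((W + dW) @ a, W @ (a + delta))
print(np.linalg.matrix_rank(dW))  # 1: the implicit update is rank-1
```

This only shows the algebraic equivalence for the linear part of the MLP; the paper's actual construction and its simplifying assumptions should be consulted for the full statement.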
Problem

Research questions and friction points this paper is trying to address.

Understanding how LLMs learn new patterns without training
Exploring implicit weight modification in transformer blocks
Mechanism behind in-context learning in large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-attention layer with MLP enables implicit weight modification
Transformer block transforms context into low-rank weight-update
LLMs learn in-context without additional training updates