A Framework for Quantifying How Pre-Training and Context Benefit In-Context Learning

πŸ“… 2025-10-26
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work investigates the intrinsic mechanisms underlying in-context learning (ICL) in pretrained language models, specifically how context modulates the output distribution when the pretraining and downstream task distributions are mismatched. Method: We propose the first quantifiable analytical framework for ICL, modeling data generation, token encoding, and prompt construction via a single-layer Transformer. Contribution/Results: We establish the first precise theoretical relationship among context length, the KL divergence between the pretraining and task distributions, and ICL performance. We prove that an appropriately constructed context progressively steers the model's output distribution toward the target task distribution. Empirical validation confirms that ICL performance improves with increasing context length and decreasing distributional divergence, revealing a synergistic interplay between pretraining and context design in realistic settings. Our framework provides both theoretical grounding and actionable insights for optimizing ICL through principled context construction.

πŸ“ Abstract
Pre-trained large language models have demonstrated a strong ability to learn from context, known as in-context learning (ICL). Despite a surge of recent applications that leverage such capabilities, it is by no means clear, at least theoretically, how ICL capabilities arise, and in particular what precise roles key factors such as the pre-training procedure and context construction play. In this work, we propose a new framework to analyze ICL performance for a class of realistic settings, covering network architectures, data encoding, data generation, and the prompt construction process. As a first step, we construct a simple example with a one-layer transformer and show an interesting result: when the pre-training data distribution differs from the query task distribution, a properly constructed context can shift the output distribution toward the query task distribution in a quantifiable manner, leading to accurate prediction on the query topic. We then extend these findings to a more general case and derive the precise relationship between ICL performance, context length, and the KL divergence between the pre-training and query task distributions. Finally, we provide experiments to validate our theoretical results.
Problem

Research questions and friction points this paper is trying to address.

Analyzing how pre-training and context construction enable in-context learning
Quantifying context's role in shifting output toward query task distribution
Establishing relationship between ICL performance and distribution divergence metrics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Framework analyzes in-context learning performance quantitatively
Context shifts output distribution toward query task distribution
Derives relationship between performance, context length, distribution divergence
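The mechanism described above can be illustrated with a toy Bayesian-mixture sketch (not the paper's actual model or transformer construction; the two "topics", their token distributions, and the prior are hypothetical): a pre-training prior is mismatched with the query task, and conditioning on context tokens drawn from the task shifts the posterior-predictive distribution toward the task distribution, shrinking the KL divergence as context length grows.

```python
import numpy as np

# Two hypothetical "topics", each a categorical distribution over 5 tokens.
# The pre-training prior favors topic A; the query task is topic B.
topic_a = np.array([0.5, 0.2, 0.1, 0.1, 0.1])
topic_b = np.array([0.1, 0.1, 0.1, 0.2, 0.5])
prior = np.array([0.9, 0.1])  # mismatch: pre-training leans toward topic A

def kl(p, q):
    """KL divergence D(p || q) between categorical distributions."""
    return float(np.sum(p * np.log(p / q)))

def predictive_after_context(n, seed=0):
    """Posterior-predictive token distribution after observing n
    context tokens sampled from the query task (topic B)."""
    rng = np.random.default_rng(seed)
    ctx = rng.choice(5, size=n, p=topic_b)
    # Log-posterior over topics given the context (Bayes' rule).
    log_post = np.log(prior) + np.array([
        np.sum(np.log(topic_a[ctx])),
        np.sum(np.log(topic_b[ctx])),
    ])
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    # Mixture predictive distribution over the next token.
    return post[0] * topic_a + post[1] * topic_b

# KL to the target task distribution shrinks as context length grows.
for n in [0, 4, 16, 64]:
    print(n, round(kl(topic_b, predictive_after_context(n)), 4))
```

With no context the predictive distribution is the prior mixture, far from topic B; as more task-consistent tokens are conditioned on, the posterior concentrates on topic B and the KL gap closes, mirroring the paper's qualitative claim that longer context and smaller pretrain-task divergence both improve ICL.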
πŸ”Ž Similar Papers
No similar papers found.