Blink of an eye: a simple theory for feature localization in generative models

📅 2025-02-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates the origins of unintended behaviors in large language model (LLM) generation, focusing on the phenomenon in which critical content decisions occur within an extremely narrow temporal window. We propose the theory of *spontaneous localization during generation*: during early inference, the model rapidly converges to a sub-population of the training data distribution, triggering abrupt behavioral shifts. Unlike prior approaches, our framework unifies explanations for "critical windows" across both autoregressive and diffusion models, without assuming specific data distributions or relying on stochastic calculus or statistical physics. Instead, it integrates probabilistic localization analysis, information bottleneck principles, and an empirically grounded validation methodology. The theory quantitatively tightens existing theoretical bounds and reveals a deep connection to "all-or-nothing" phenomena in statistical inference. Empirical evaluation on LLMs shows a strong correlation between critical windows and failures on mathematical reasoning tasks, establishing a new paradigm for controllable generation and failure diagnosis.

📝 Abstract
Large language models (LLMs) can exhibit undesirable and unexpected behavior in the blink of an eye. In a recent Anthropic demo, Claude switched from coding to Googling pictures of Yellowstone, and these sudden shifts in behavior have also been observed in reasoning patterns and jailbreaks. This phenomenon is not unique to autoregressive models: in diffusion models, key features of the final output are decided in narrow "critical windows" of the generation process. In this work we develop a simple, unifying theory to explain this phenomenon. We show that it emerges generically as the generation process localizes to a sub-population of the distribution it models. While critical windows have been studied at length in diffusion models, existing theory heavily relies on strong distributional assumptions and the particulars of Gaussian diffusion. In contrast to existing work, our theory (1) applies to autoregressive and diffusion models; (2) makes no distributional assumptions; (3) quantitatively improves previous bounds even when specialized to diffusions; and (4) requires basic tools and no stochastic calculus or statistical physics-based machinery. We also identify an intriguing connection to the all-or-nothing phenomenon from statistical inference. Finally, we validate our predictions empirically for LLMs and find that critical windows often coincide with failures in problem solving for various math and reasoning benchmarks.
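The core mechanism, generation localizing to one sub-population of a mixture, can be illustrated with a toy simulation (a hedged sketch, not the paper's model: the two-component mixture, the 0.9/0.1 emission probabilities, and the 0.05–0.95 "uncertain band" are all illustrative assumptions). A sequence is sampled token by token from one sub-population, and Bayes' rule tracks the posterior over which sub-population is being generated; the posterior leaves the uncertain band after only a handful of tokens, a narrow window relative to the full sequence:

```python
import math
import random

random.seed(0)

# Two hypothetical sub-populations over binary tokens: "A" emits token 1
# with probability 0.9, "B" with probability 0.1; mixture weights are 50/50.
# (Illustrative numbers, not taken from the paper.)
EMIT = {"A": 0.9, "B": 0.1}

def posterior_A(prefix):
    """P(sub-population A | observed prefix), computed via Bayes' rule."""
    ones = sum(prefix)
    zeros = len(prefix) - ones
    log_lik = {c: ones * math.log(q) + zeros * math.log(1 - q)
               for c, q in EMIT.items()}
    m = max(log_lik.values())  # subtract max for numerical stability
    w = {c: math.exp(v - m) for c, v in log_lik.items()}
    return w["A"] / (w["A"] + w["B"])

# Sample a 200-token sequence from sub-population A and record how long the
# posterior stays inside the uncertain band (0.05, 0.95): that span plays the
# role of a "critical window" in which the sequence's identity gets decided.
seq = [1 if random.random() < EMIT["A"] else 0 for _ in range(200)]
traj = [posterior_A(seq[:t]) for t in range(1, len(seq) + 1)]
window = [t for t, p in enumerate(traj, start=1) if 0.05 < p < 0.95]
print("final posterior:", round(traj[-1], 4))
print("steps in uncertain band:", len(window), "of", len(seq))
```

The closer together the sub-populations' emission distributions are, the wider this window becomes, which loosely matches the intuition that critical windows arise where sub-populations separate quickly in likelihood.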
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Unintended Behaviors
Generative Models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified Theory
Generative Models
Critical Decision Moments