Out-of-distribution generalization via composition: a lens through induction heads in Transformers

📅 2024-08-18
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
This work investigates the zero-shot out-of-distribution (OOD) generalization mechanisms of large language models (e.g., GPT-4) on implicit rule-driven tasks, such as symbolic reasoning and in-context learning, without fine-tuning. The paper proposes the common bridge representation hypothesis: early and later self-attention layers align within a shared low-dimensional subspace of the embedding (or feature) space, and this subspace acts as a bridge that enables cross-layer composition of implicit rules. The study ties OOD generalization to this structured compositionality: models can learn hidden rules by composing two self-attention layers. The mechanism is examined through the training dynamics of Transformers on a synthetic example and through extensive experiments on a variety of pretrained LLMs, with a focus on localizing and intervening on induction heads. The identified bridging subspaces suggest a pathway toward more controllable generalization via targeted representational intervention.

📝 Abstract
Large language models (LLMs) such as GPT-4 sometimes appear to be creative, solving novel tasks often with a few demonstrations in the prompt. These tasks require the models to generalize on distributions different from those of the training data -- which is known as out-of-distribution (OOD) generalization. Despite the tremendous success of LLMs, how they approach OOD generalization remains an open and underexplored question. We examine OOD generalization in settings where instances are generated according to hidden rules, including in-context learning with symbolic reasoning. Models are required to infer the hidden rules behind input prompts without any fine-tuning. We empirically examined the training dynamics of Transformers on a synthetic example and conducted extensive experiments on a variety of pretrained LLMs, focusing on a type of component known as induction heads. We found that OOD generalization and composition are tied together -- models can learn rules by composing two self-attention layers, thereby achieving OOD generalization. Furthermore, a shared latent subspace in the embedding (or feature) space acts as a bridge for composition by aligning early layers and later layers, which we refer to as the common bridge representation hypothesis.
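The induction-head behavior the abstract describes can be illustrated with a toy sketch (this is purely illustrative and not the authors' code): an induction head completes a repeated pattern `[A][B] ... [A] -> [B]` by attending from the current token back to the token that followed its previous occurrence, and copying it forward.

```python
# Toy sketch of induction-head behavior: predict the token that followed
# the most recent previous occurrence of the current token. This mimics
# the prefix-matching + copying circuit at a symbolic level; it is not
# the paper's implementation and uses no actual attention weights.
def induction_predict(tokens):
    """Return the predicted next token, or None if the current token
    has not appeared earlier in the sequence."""
    current = tokens[-1]
    # Scan backwards for the previous occurrence of the current token.
    for i in range(len(tokens) - 2, -1, -1):
        if tokens[i] == current:
            return tokens[i + 1]  # copy the token that followed it
    return None

# Example: the hidden rule "A is followed by B" is inferred from context
# and applied when the prompt ends in "A" again.
print(induction_predict(["A", "B", "C", "D", "A"]))  # -> B
```

In the paper's framing, the first of the two composed self-attention layers performs the "look back to the matching token" step and the second performs the copy, with the shared bridging subspace aligning the representations the two layers exchange.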
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Out-of-Domain Generalization
Unseen Situations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transformer Models
Compositional Learning Mechanism
Bridging Representations