Understanding the Skill Gap in Recurrent Language Models: The Role of the Gather-and-Aggregate Mechanism

📅 2025-04-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Transformer and State Space Model (SSM) language models exhibit performance bottlenecks on long-sequence algorithmic tasks such as in-context retrieval, despite their architectural differences. Method: We identify a shared, task-critical mechanism, Gather-and-Aggregate (G&A), in which sparse "Gather Heads" extract salient contextual tokens and "Aggregate Heads" synthesize them into task-relevant representations. Through head-level mechanistic analysis, controlled interventions (masking and replacement), and evaluation across MMLU, GSM8K, BBH, and dialogue benchmarks, we isolate G&A's causal role. Contribution/Results: We establish the first cross-architecture unification of G&A, showing that performance gaps stem from differences in how individual heads implement G&A, not from global architectural differences. Masking a single Gather or Aggregate Head of a pruned Llama-3.1-8B reduces its MMLU retrieval accuracy from 66% to 25%. Moreover, injecting lightweight attention-based heads into SSMs significantly improves long-range retrieval, confirming G&A as a fundamental, portable computational primitive.

📝 Abstract
SSMs offer efficient processing of long sequences with fixed state sizes, but struggle with algorithmic tasks like retrieving past context. In this work, we examine how such in-context retrieval operates within Transformer- and SSM-based language models. We find that both architectures develop the same fundamental Gather-and-Aggregate (G&A) mechanism. A Gather Head first identifies and extracts relevant information from the context, which an Aggregate Head then integrates into a final representation. Across both model types, G&A concentrates in just a few heads, making them critical bottlenecks even for benchmarks that require a basic form of retrieval. For example, disabling a single Gather or Aggregate Head of a pruned Llama-3.1-8B degrades its ability to retrieve the correct answer letter in MMLU, reducing accuracy from 66% to 25%. This finding suggests that in-context retrieval can obscure the limited knowledge demands of certain tasks. Despite strong MMLU performance with retrieval intact, the pruned model fails on other knowledge tests. Similar G&A dependencies exist in GSM8K, BBH, and dialogue tasks. Given the significance of G&A in performance, we show that retrieval challenges in SSMs manifest in how they implement G&A, leading to smoother attention patterns rather than the sharp token transitions that effective G&A relies on. Thus, while a gap exists between Transformers and SSMs in implementing in-context retrieval, it is confined to a few heads, not the entire model. This insight suggests a unified explanation for performance differences between Transformers and SSMs while also highlighting ways to combine their strengths. For example, in pretrained hybrid models, attention components naturally take on the role of Aggregate Heads. Similarly, in a pretrained pure SSM, replacing a single G&A head with an attention-based variant significantly improves retrieval.
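The head-masking intervention described above can be sketched in a few lines: a toy multi-head self-attention layer in which one head's output is zeroed before concatenation. This is an illustrative numpy sketch under assumed names and shapes, not the paper's actual code or the Llama-3.1-8B implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(x, Wq, Wk, Wv, mask_head=None):
    """Toy multi-head self-attention with optional single-head ablation.

    x: (seq_len, d_model); Wq/Wk/Wv: (n_heads, d_model, d_head).
    Setting mask_head=h zeros out head h's contribution, mimicking the
    "disable a single Gather or Aggregate Head" intervention.
    """
    n_heads, _, d_head = Wq.shape
    outs = []
    for h in range(n_heads):
        q, k, v = x @ Wq[h], x @ Wk[h], x @ Wv[h]
        attn = softmax(q @ k.T / np.sqrt(d_head), axis=-1)
        out = attn @ v                    # (seq_len, d_head)
        if h == mask_head:
            out = np.zeros_like(out)      # ablate this head
        outs.append(out)
    return np.concatenate(outs, axis=-1)  # (seq_len, n_heads * d_head)

# Compare the full forward pass against the ablated one; in the paper's
# experiments, this kind of single-head ablation is what collapses MMLU
# retrieval accuracy from 66% to 25%.
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))
Wq, Wk, Wv = (rng.normal(size=(2, 8, 4)) for _ in range(3))
full = multi_head_attention(x, Wq, Wk, Wv)
ablated = multi_head_attention(x, Wq, Wk, Wv, mask_head=0)
```

Running the model with and without the mask on a downstream benchmark, and attributing the accuracy gap to the masked head, is the essence of the causal analysis the paper performs at scale.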
Problem

Research questions and friction points this paper is trying to address.

Examines in-context retrieval in Transformer- and SSM-based language models
Identifies Gather-and-Aggregate mechanism as critical bottleneck for retrieval tasks
Analyzes performance gap between Transformers and SSMs in retrieval implementation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Gather-and-Aggregate mechanism for context retrieval
Few critical heads bottleneck retrieval performance
Hybrid models combine SSM and attention strengths
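The abstract's contrast between the sharp token transitions effective G&A relies on and the smoother attention patterns SSMs produce can be made concrete with a normalized-entropy score over an attention distribution. This is a minimal illustrative sketch; the metric and its name are assumptions, not a measure defined by the paper.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_sharpness(scores):
    """1.0 = all mass on one token (sharp gather); 0.0 = uniform (smooth)."""
    p = softmax(np.asarray(scores, dtype=float))
    entropy = -(p * np.log(p + 1e-12)).sum()
    return 1.0 - entropy / np.log(len(p))

# A near-one-hot distribution (attention-like sharp gather) versus a
# uniform one (the smoother pattern attributed to SSM heads).
sharp = attention_sharpness([10.0, 0.0, 0.0, 0.0])
smooth = attention_sharpness([1.0, 1.0, 1.0, 1.0])
```

Under this score, a head that cleanly selects one salient token scores near 1, while a head that spreads mass across the context scores near 0, which is one way to operationalize the smoothness gap the paper identifies in SSM implementations of G&A.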