Dissecting Linear Recurrent Models: How Different Gating Strategies Drive Selectivity and Generalization

📅 2026-01-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing benchmarks for evaluating the selectivity and generalization of linear recurrent models are either overly simplistic or computationally expensive. This work proposes SelectivBench, a lightweight, synthetic benchmark suite generated via rule-based grammars with tunable complexity, to systematically assess, through controlled experiments, models' capacity to attend to critical information and suppress distractors. We introduce a refined taxonomy of linear recurrent architectures, incorporating adjustable gap-based interference mechanisms and diverse gating schemes, and compare them against softmax attention. Experiments demonstrate that SelectivBench's evaluations align with performance on large-scale language tasks: rapid forgetting and gating facilitate memory retrieval, while in-state channel mixing is crucial for generalization, though not strictly necessary for selectivity.

📝 Abstract
Linear recurrent neural networks have emerged as efficient alternatives to the original Transformer's softmax attention mechanism, thanks to their highly parallelizable training and constant memory and computation requirements at inference. Iterative refinements of these models have introduced an increasing number of architectural mechanisms, leading to greater complexity and computational cost. Nevertheless, systematic direct comparisons among these models remain limited. Existing benchmark tasks are either too simplistic to reveal substantial differences or too resource-intensive for experimentation. In this work, we propose a refined taxonomy of linear recurrent models and introduce SelectivBench, a set of lightweight and customizable synthetic benchmark tasks for systematically evaluating sequence models. SelectivBench specifically evaluates selectivity in sequence models at small to medium scale, namely the capacity to focus on relevant inputs while ignoring context-based distractors. It employs rule-based grammars to generate sequences with adjustable complexity, incorporating irregular gaps that intentionally violate transition rules. Evaluations of linear recurrent models on SelectivBench reveal performance patterns consistent with results from large-scale language tasks. Our analysis clarifies the roles of essential architectural features: gating and rapid forgetting mechanisms facilitate recall; in-state channel mixing is unnecessary for selectivity but critical for generalization; and softmax attention remains dominant because its memory capacity scales with sequence length. Our benchmark enables targeted, efficient exploration of linear recurrent models and provides a controlled setting for studying behaviors observed in large-scale evaluations. Code is available at https://github.com/symseqbench/selectivbench
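The gating and forgetting mechanisms the abstract refers to can be illustrated with a generic diagonal gated linear recurrence, h_t = a_t ⊙ h_{t-1} + b_t ⊙ x_t, where input-dependent gates a_t and b_t control forgetting and writing per channel. This is a minimal sketch of the general model family, not the paper's exact formulation; all names and weight shapes here are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gated_linear_recurrence(x, W_a, W_b):
    """Illustrative diagonal gated linear recurrence.

    h_t = a_t * h_{t-1} + b_t * x_t  (all products elementwise)
    a_t in (0, 1) is a forget gate; b_t an input gate. Because the
    update is purely elementwise, there is no in-state channel mixing.
    """
    T, d = x.shape
    h = np.zeros(d)
    outputs = []
    for t in range(T):
        a = sigmoid(x[t] @ W_a)   # input-dependent forget gate
        b = sigmoid(x[t] @ W_b)   # input-dependent input gate
        h = a * h + b * x[t]      # diagonal (channel-wise) state update
        outputs.append(h.copy())
    return np.stack(outputs)

rng = np.random.default_rng(0)
T, d = 6, 4
x = rng.standard_normal((T, d))
W_a = rng.standard_normal((d, d))
W_b = rng.standard_normal((d, d))
y = gated_linear_recurrence(x, W_a, W_b)
print(y.shape)  # (6, 4): one hidden state per time step
```

A small forget gate a_t implements the "rapid forgetting" the abstract credits for recall; replacing the elementwise products with a full matrix acting on h would add the in-state channel mixing the paper finds critical for generalization.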
Problem

Research questions and friction points this paper is trying to address.

linear recurrent models, selectivity, generalization, benchmarking, sequence modeling
Innovation

Methods, ideas, or system contributions that make the work stand out.

linear recurrent models, gating mechanisms, selectivity, synthetic benchmark, sequence modeling
Younes Bouhadjar
Peter Grünberg Institute, Neuromorphic Software Ecosystems (PGI-15), Jülich Research Centre, Germany
Maxime Fabre
Peter Grünberg Institute, Neuromorphic Software Ecosystems (PGI-15), Jülich Research Centre, Germany; Groningen Cognitive Systems and Materials Center (CogniGron), University of Groningen
Felix Schmidt
Peter Grünberg Institute, Neuromorphic Software Ecosystems (PGI-15), Jülich Research Centre, Germany; RWTH Aachen University, Aachen, Germany
Emre Neftci
Institute Director, Forschungszentrum Jülich; Professor, RWTH Aachen
Neuromorphic Engineering, Computational Neuroscience, Cognitive Systems and Behavior, Machine Learning