Unlocking State-Tracking in Linear RNNs Through Negative Eigenvalues

📅 2024-11-19
🏛️ arXiv.org
📈 Citations: 3
Influential: 1
🤖 AI Summary
Linear RNNs (LRNNs) such as Mamba and DeltaNet achieve computational efficiency but fail at state-tracking tasks, e.g., parity checking, which limits their applicability in code generation and mathematical reasoning. We identify the root cause: existing architectures restrict their state-transition matrices to non-negative eigenvalues, severely constraining representational capacity. We formally prove that solving parity requires *negative* eigenvalues, and that counting modulo 3 further requires non-triangular state-transition matrices. Method: we propose a general construction for state-transition matrices with eigenvalues in [-1, 1] that explicitly admits negative eigenvalues, overcoming this expressivity bottleneck. Building on it, we design an enhanced LRNN architecture and pretrain it at scale (1.3B parameters). Results: our model solves fundamental state-tracking tasks (e.g., parity checking) and achieves significant improvements in code generation, mathematical reasoning, and general language modeling, demonstrating stable, efficient linear sequence modeling.
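To see why negative eigenvalues matter for parity, here is a minimal, hypothetical sketch (not the paper's architecture): a scalar linear recurrence h_t = a(x_t) · h_{t-1}, where the input-dependent transition a(x_t) can take the value -1. With a(x) confined to [0, 1], the sign of h can never flip, so parity is unreachable.

```python
def parity_lrnn(bits):
    """Track parity of a bit stream with a transition a_t in {-1, +1}."""
    h = 1.0
    for x in bits:
        a = -1.0 if x == 1 else 1.0  # eigenvalue -1 flips the state sign
        h = a * h
    return 0 if h > 0 else 1  # sign of h encodes the parity

assert parity_lrnn([1, 0, 1, 1]) == 1  # three ones -> odd
assert parity_lrnn([1, 1, 0, 0]) == 0  # two ones  -> even
```

The toy recurrence is linear in h, yet solves parity in one pass, which is exactly what a transition range restricted to [0, 1] rules out.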

📝 Abstract
Linear Recurrent Neural Networks (LRNNs) such as Mamba, RWKV, GLA, mLSTM, and DeltaNet have emerged as efficient alternatives to Transformers for long sequences. However, both Transformers and LRNNs struggle to perform state-tracking, which may impair performance in tasks such as code evaluation. In one forward pass, current architectures are unable to solve even parity, the simplest state-tracking task, which non-linear RNNs can handle effectively. Recently, Sarrof et al. (2024) demonstrated that the failure of LRNNs like Mamba to solve parity stems from restricting the value range of their diagonal state-transition matrices to $[0, 1]$ and that incorporating negative values can resolve this issue. We extend this result to non-diagonal LRNNs such as DeltaNet. We prove that finite precision LRNNs with state-transition matrices having only positive eigenvalues cannot solve parity, while non-triangular matrices are needed to count modulo $3$. Notably, we also prove that LRNNs can learn any regular language when their state-transition matrices are products of identity minus vector outer product matrices, each with eigenvalues in the range $[-1, 1]$. Our experiments confirm that extending the eigenvalue range of Mamba and DeltaNet to include negative values not only enables them to solve parity but consistently improves their performance on state-tracking tasks. We also show that state-tracking enabled LRNNs can be pretrained stably and efficiently at scale (1.3B parameters), achieving competitive performance on language modeling and showing promise on code and math tasks.
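The abstract's key construction, identity minus a vector outer product, can be checked numerically. In this assumed sketch, A = I - β k kᵀ with unit k has eigenvalue 1 - β along k and eigenvalue 1 elsewhere; allowing β ∈ [0, 2] (rather than [0, 1]) places 1 - β anywhere in [-1, 1], including negative values.

```python
import numpy as np

n, beta = 4, 2.0  # beta = 2 pushes one eigenvalue to -1
rng = np.random.default_rng(0)
k = rng.standard_normal(n)
k /= np.linalg.norm(k)  # unit vector

# DeltaNet-style transition: identity minus a scaled outer product
A = np.eye(n) - beta * np.outer(k, k)
eigs = np.sort(np.linalg.eigvalsh(A))

assert np.allclose(eigs[0], 1 - beta)  # eigenvalue 1 - beta = -1 along k
assert np.allclose(eigs[1:], 1.0)      # eigenvalue 1 on the complement
```

Products of such generalized Householder matrices are what the paper proves sufficient to learn any regular language.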
Problem

Research questions and friction points this paper is trying to address.

Linear RNNs fail at even the simplest state-tracking tasks, such as parity, in a single forward pass.
Why does restricting state-transition eigenvalues to non-negative values prevent LRNNs from solving parity?
Can an extended eigenvalue range improve state-tracking while preserving stable, efficient pretraining at scale?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Extends the eigenvalue range of state-transition matrices to [-1, 1], admitting negative values
Proves non-triangular state-transition matrices are necessary for counting modulo 3
Demonstrates stable, efficient large-scale pretraining (1.3B parameters) of state-tracking LRNNs
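The modulo-3 point can be illustrated with an assumed toy example (not from the paper's experiments): a fixed non-triangular transition, a 120° rotation, cycles a 2-D state through three distinct configurations, something no real triangular matrix with eigenvalues in [-1, 1] can do.

```python
import numpy as np

# Non-triangular transition: rotation by 120 degrees, so R^3 = I
theta = 2 * np.pi / 3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

h = np.array([1.0, 0.0])
states = []
for _ in range(3):
    h = R @ h
    states.append(h.copy())

assert np.allclose(states[-1], [1.0, 0.0])       # back to start: count = 0 mod 3
assert not np.allclose(states[0], states[1])     # intermediate states are distinct
```

The three distinct states act as a counter modulo 3, which is exactly the regular-language behavior the paper shows requires leaving the (block-)triangular regime.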