Revisiting Bi-Linear State Transitions in Recurrent Neural Networks

📅 2025-05-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Recurrent Neural Networks (RNNs) traditionally treat hidden units as passive memory buffers, overlooking their potential role in actively computing state transitions. Method: We propose a bilinear state update mechanism—multiplicative interaction between hidden states and input embeddings—as a natural inductive bias for state-tracking tasks, and construct a hierarchy of increasingly expressive bilinear RNN architectures. Contribution/Results: We theoretically establish strong representational capacity of bilinear RNNs for arbitrary state-evolution functions. Empirical evaluation on controlled synthetic benchmarks and real-world tasks demonstrates that bilinear RNNs significantly outperform standard linear RNNs. Notably, modern linear RNNs such as Mamba are shown to be degenerate special cases—the simplest members—of our bilinear hierarchy. This work provides both a novel theoretical foundation and a practical design paradigm for RNN architecture development, redefining hidden units as active computational agents in state evolution.

📝 Abstract
The role of hidden units in recurrent neural networks is typically seen as modeling memory, with research focusing on enhancing information retention through gating mechanisms. A less explored perspective views hidden units as active participants in the computation performed by the network, rather than passive memory stores. In this work, we revisit bi-linear operations, which involve multiplicative interactions between hidden units and input embeddings. We demonstrate theoretically and empirically that they constitute a natural inductive bias for representing the evolution of hidden states in state tracking tasks. These are the simplest type of tasks that require hidden units to actively contribute to the behavior of the network. We also show that bi-linear state updates form a natural hierarchy corresponding to state tracking tasks of increasing complexity, with popular linear recurrent networks such as Mamba residing at the lowest-complexity center of that hierarchy.
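The bi-linear update described in the abstract can be sketched in a few lines. This is an illustrative reconstruction, not code from the paper: the shapes, function names, and the specific tensor parameterization `W` are assumptions. The idea is that the state-transition matrix is itself a linear function of the input embedding, so the update is multiplicative in both the hidden state and the input; restricting that matrix to be diagonal recovers the elementwise, Mamba-style linear-RNN update the abstract places at the bottom of the hierarchy.

```python
import numpy as np

def bilinear_step(h, x, W):
    """One bi-linear state update (illustrative sketch).

    W has shape (input_dim, hidden_dim, hidden_dim). The transition
    matrix A(x) = sum_k x[k] * W[k] depends linearly on the input
    embedding x, so the new state A(x) @ h is a multiplicative
    interaction between hidden units and input units.
    """
    A = np.tensordot(x, W, axes=1)  # (hidden_dim, hidden_dim)
    return A @ h

def diagonal_step(h, x, w):
    """Degenerate special case: A(x) constrained to be diagonal,
    as in modern linear RNNs such as Mamba. w has shape
    (input_dim, hidden_dim); the update reduces to input-dependent
    elementwise gating of the hidden state.
    """
    return (x @ w) * h
```

In the diagonal case hidden units can only be rescaled independently, which is why the full bi-linear update, where any hidden unit can influence any other through `A(x)`, is strictly more expressive for state-tracking tasks.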
Problem

Research questions and friction points this paper is trying to address.

Explores hidden units as active computation participants in RNNs
Investigates bi-linear operations for hidden state evolution in tasks
Establishes hierarchy of bi-linear updates for state tracking complexity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bi-linear operations for hidden state evolution
Multiplicative interactions between hidden and input units
Hierarchy of state tracking tasks complexity