🤖 AI Summary
To address the quadratic computational complexity and weak long-range dependency modeling inherent in Transformers, this paper proposes COFFEE, a time-varying state-space model built on learnable state feedback. Methodologically, COFFEE replaces the input-dependent selective gating of S6 with a context-aware state feedback mechanism that modulates the dynamics from the internal state, and adopts a redundancy-free parameterization that improves parameter efficiency and trainability. Empirically, COFFEE achieves near-perfect accuracy on the induction head task using roughly 1% of the parameters and training sequences required by S6; on MNIST classification, it reaches 97% accuracy with only 3585 parameters, substantially outperforming the S6 baseline within the same architecture. This work positions state feedback as a key mechanism for building efficient and scalable long-sequence models, advancing state-space models through principled innovations in state dynamics and parameterization.
📝 Abstract
Transformers, powered by the attention mechanism, are the backbone of most foundation models, yet they suffer from quadratic complexity and difficulty capturing long-range dependencies in the input sequence. Recent work has shown that state space models (SSMs) provide an efficient alternative, with the S6 module at the core of the Mamba architecture achieving state-of-the-art results on long-sequence benchmarks. In this paper, we introduce the COFFEE (COntext From FEEdback) model, a novel time-varying SSM that incorporates state feedback to enable context-dependent selectivity, while still allowing for parallel implementation. Whereas the selectivity mechanism of S6 depends only on the current input, COFFEE computes it from the internal state, which serves as a compact representation of the sequence history. This shift allows the model to regulate its dynamics based on accumulated context, improving its ability to capture long-range dependencies. In addition to state feedback, we employ an efficient model parametrization that removes redundancies present in S6 and leads to a more compact and trainable formulation. On the induction head task, COFFEE achieves near-perfect accuracy with two orders of magnitude fewer parameters and training sequences than S6. On MNIST, COFFEE substantially outperforms S6 within the same architecture, reaching 97% accuracy with only 3585 parameters. These results showcase the role of state feedback as a key mechanism for building scalable and efficient sequence models.
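The core contrast described above can be sketched as a toy recurrence. The sketch below is illustrative only: the function names, shapes, and the softplus step-size parameterization are assumptions for exposition, not the paper's actual COFFEE parameterization. The only point it demonstrates is where the selectivity signal comes from: the current input (S6-style) versus the internal state (state feedback).

```python
import numpy as np

def s6_style_step(x_t, h, A, B, W_delta):
    # S6-style selectivity: the step size delta is a function of the *input* x_t
    delta = np.log1p(np.exp(W_delta @ x_t))        # softplus keeps delta positive
    h = np.exp(delta * A) * h + delta * (B @ x_t)  # discretized diagonal SSM update
    return h

def coffee_style_step(x_t, h, A, B, W_delta):
    # State-feedback selectivity: delta is a function of the *state* h,
    # a compact summary of the sequence history, so the dynamics are
    # modulated by accumulated context rather than the current token alone
    delta = np.log1p(np.exp(W_delta @ h))
    h = np.exp(delta * A) * h + delta * (B @ x_t)
    return h

rng = np.random.default_rng(0)
d_in, d_state = 4, 8
A = -np.ones(d_state)                              # stable diagonal dynamics
B = rng.standard_normal((d_state, d_in)) * 0.1
W6 = rng.standard_normal((d_state, d_in)) * 0.1    # maps input -> delta
Wc = rng.standard_normal((d_state, d_state)) * 0.1 # maps state -> delta

h6 = np.zeros(d_state)
hc = np.zeros(d_state)
for t in range(16):
    x_t = rng.standard_normal(d_in)
    h6 = s6_style_step(x_t, h6, A, B, W6)
    hc = coffee_style_step(x_t, hc, A, B, Wc)
```

Note that in the S6-style step two identical inputs always produce the same gating regardless of history, while in the state-feedback step the same input can be absorbed or suppressed differently depending on what the state has already accumulated.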