🤖 AI Summary
This work proposes HyenaRec, a novel sequential recommendation model that addresses the quadratic computational complexity and the efficiency-accuracy trade-off of existing attention-based approaches on long user histories. By introducing Legendre orthogonal-polynomial-parameterized convolutional kernels into recommender systems for the first time, HyenaRec combines Hyena operators with a gating mechanism to build a hybrid architecture with linear complexity that captures both short- and long-term user behavioral patterns. This design improves representational capacity and inference efficiency on sparse, long sequences, and outperforms state-of-the-art baselines across multiple real-world datasets. Notably, HyenaRec trains up to 6× faster while maintaining high recommendation accuracy, with the advantage most pronounced in long-sequence scenarios.
📝 Abstract
Sequential recommendation models, particularly those based on attention, achieve strong accuracy but incur quadratic complexity, making long user histories prohibitively expensive. Sub-quadratic operators such as Hyena provide efficient alternatives in language modeling, but their potential in recommendation remains underexplored. We argue that Hyena faces challenges in recommendation due to limited representation capacity on sparse, long user sequences. To address these challenges, we propose HyenaRec, a novel sequential recommender that integrates polynomial-based kernel parameterization with gated convolutions. Specifically, we design convolutional kernels using Legendre orthogonal polynomials, which provide a smooth and compact basis for modeling long-term temporal dependencies. A complementary gating mechanism captures fine-grained short-term behavioral bursts, yielding a hybrid architecture that balances global temporal evolution with localized user interests under sparse feedback. This construction enhances expressiveness while scaling linearly with sequence length. Extensive experiments on multiple real-world datasets demonstrate that HyenaRec consistently outperforms attention-based, recurrent, and other strong baselines in ranking accuracy. Moreover, it trains significantly faster (up to 6× speedup), with particularly pronounced advantages in long-sequence scenarios, where efficiency is maintained without sacrificing accuracy. These results highlight polynomial-based kernel parameterization as a principled and scalable alternative to attention for sequential recommendation.
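To make the core idea concrete, below is a minimal NumPy sketch of the two ingredients the abstract names: a long-convolution kernel expressed as a weighted sum of Legendre polynomials, applied causally in O(L log L) via FFT, and an elementwise sigmoid gate supplying short-term, data-dependent modulation. This is an illustration under our own assumptions, not the paper's implementation; the function names (`legendre_kernel`, `gated_long_conv`) and the single kernel shared across embedding dimensions are hypothetical simplifications.

```python
import numpy as np

def legendre_kernel(seq_len, coeffs):
    # Kernel = sum_i coeffs[i] * P_i(t), with Legendre polynomials P_i
    # evaluated on [-1, 1] at seq_len sample points. A few coefficients
    # parameterize a smooth kernel spanning the whole sequence.
    t = np.linspace(-1.0, 1.0, seq_len)
    return np.polynomial.legendre.legval(t, coeffs)

def gated_long_conv(x, coeffs, gate_w):
    # x: (seq_len, dim) sequence of item embeddings.
    seq_len, dim = x.shape
    k = legendre_kernel(seq_len, coeffs)                    # (seq_len,)
    # Causal long convolution via FFT with zero padding:
    # y[t] = sum_{s <= t} k[t - s] * x[s], in O(L log L) not O(L^2).
    n = 2 * seq_len
    y = np.fft.irfft(np.fft.rfft(x, n=n, axis=0) *
                     np.fft.rfft(k, n=n)[:, None], n=n, axis=0)[:seq_len]
    # Sigmoid gate from a linear projection of the input captures
    # short-term behavioral signals and modulates the global convolution.
    gate = 1.0 / (1.0 + np.exp(-x @ gate_w))                # (seq_len, dim)
    return gate * y

# Toy usage with random data.
rng = np.random.default_rng(0)
x = rng.standard_normal((128, 16))          # 128 interactions, dim 16
coeffs = rng.standard_normal(8) / 8.0       # 8 Legendre coefficients
gate_w = rng.standard_normal((16, 16)) / 4.0
out = gated_long_conv(x, coeffs, gate_w)    # (128, 16)
```

The compactness of the basis is the point: the kernel covers the full history yet is controlled by only a handful of polynomial coefficients, which is how linear-time scaling and long-range modeling can coexist.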