EARN: Efficient Inference Acceleration for LLM-based Generative Recommendation by Register Tokens

📅 2025-07-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high inference latency induced by the KV cache in LLM-based recommender systems (LLMRec), this paper proposes EARN, an efficient inference framework. The authors first uncover two LLMRec-specific attention patterns: inter-layer attention sparsity inversion and head-tail dual attention concentration. Leveraging these insights, EARN places register tokens at the input sequence boundaries for information compression: historical interaction sequences are compressed into a small set of register tokens in the early layers, and subsequent layers attend exclusively to these tokens, drastically reducing the KV cache memory footprint and computational overhead. The framework couples this inference acceleration with recommendation-specific fine-tuning, so the model learns to route interaction-history information through the registers. Experiments across three benchmark datasets, two LLMRec paradigms, and two model architectures demonstrate that EARN achieves up to 3.79× inference speedup and 80.8% KV cache reduction, while outperforming general-purpose fine-tuning baselines in recommendation accuracy.
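The attention pattern described above can be sketched as a layer-dependent mask: early layers use ordinary causal attention over the full sequence, while later layers restrict keys to register tokens at the head and tail of the input. This is a minimal illustrative sketch; the function and parameter names (`split_layer`, `n_reg_head`, `n_reg_tail`) are assumptions, not the paper's actual implementation.

```python
import numpy as np

def register_attention_mask(seq_len, n_reg_head, n_reg_tail, layer, split_layer):
    """Sketch of an EARN-style attention mask (hypothetical parameter names).

    Layers before `split_layer` use full causal attention; layers at or
    beyond it attend only to register tokens placed at the sequence
    boundaries (the first `n_reg_head` and last `n_reg_tail` positions).
    Returns a boolean (query, key) mask where True means "may attend".
    """
    mask = np.tril(np.ones((seq_len, seq_len), dtype=bool))  # causal mask
    if layer >= split_layer:
        keep = np.zeros(seq_len, dtype=bool)
        keep[:n_reg_head] = True             # head register tokens
        keep[seq_len - n_reg_tail:] = True   # tail register tokens
        mask &= keep[None, :]                # keys limited to registers
    return mask

# In a late layer, interior history tokens are no longer attended to,
# so their K/V entries need not be kept in the cache.
mask = register_attention_mask(seq_len=10, n_reg_head=2, n_reg_tail=2,
                               layer=20, split_layer=4)
```

Because late layers never read the interior positions, their key/value entries can be evicted from the cache after the early layers have compressed the history into the registers.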

📝 Abstract
Large Language Model-based generative recommendation (LLMRec) has achieved notable success, but it suffers from high inference latency due to massive computational overhead and memory pressure of KV Cache. Existing KV Cache reduction methods face critical limitations: cache compression offers marginal acceleration given recommendation tasks' short decoding steps, while prompt compression risks discarding vital interaction history. Through systematic analysis of attention patterns in LLMRec, we uncover two pivotal insights: 1) layer-wise attention sparsity inversion where early layers retain dense informative patterns while later layers exhibit high redundancy, and 2) dual attention sinks phenomenon where attention scores concentrate on both head and tail tokens of input sequences. Motivated by these insights, we propose EARN, an efficient inference framework that leverages the early layers to compress information into register tokens placed at the input sequence boundaries, then focuses solely on these tokens in the subsequent layers. Extensive experiments on three datasets, two LLMRec methods and two LLM architectures demonstrate EARN's superiority, achieving up to 3.79x speedup and 80.8% KV Cache reduction with better accuracy than the general finetuning approach. Our work bridges the efficiency-effectiveness gap in LLMRec, offering practical deployment advantages for industrial scenarios.
Problem

Research questions and friction points this paper is trying to address.

Reducing high inference latency in LLM-based generative recommendation
Addressing KV Cache inefficiency in recommendation tasks
Exploiting attention patterns for faster and more accurate LLMRec
Innovation

Methods, ideas, or system contributions that make the work stand out.

Layer-wise attention sparsity inversion analysis
Exploitation of the dual attention sinks phenomenon
Register tokens for efficient KV Cache reduction
Authors
Chaoqun Yang — Tsinghua University
Xinyu Lin — National University of Singapore (recommendation)
Wenjie Wang — University of Science and Technology of China
Yongqi Li — The Hong Kong Polytechnic University
Teng Sun — Shandong University (multimedia computing, information retrieval, causal inference)
Xianjing Han — National University of Singapore (multimodal analysis)
Tat-Seng Chua — National University of Singapore