Efficient Inference for Large Language Model-based Generative Recommendation

📅 2024-10-07
🏛️ arXiv.org
📈 Citations: 2
✨ Influential: 0
🤖 AI Summary
To address the high inference latency and deployment cost caused by autoregressive decoding in LLM-based generative recommendation, this paper proposes AtSpeed, a speculative decoding framework. Methodologically, it introduces (1) top-K sequence alignment objectives (AtSpeed-S and AtSpeed-R) that improve distributional consistency between the draft model and the target LLM over the K candidate sequences generated via beam search, and (2) a relaxed sampling verification mechanism that accepts high-probability drafted sequences outside the target's top-K, easing the strict verification bottleneck that conventional speculative decoding faces in recommendation tasks. Evaluated on real-world datasets, AtSpeed achieves a 1.9× speedup under strict verification and up to 2.5× under relaxed verification, significantly reducing the number of target-LLM calls. The code and datasets are publicly released.

πŸ“ Abstract
Large Language Model (LLM)-based generative recommendation has achieved notable success, yet its practical deployment is costly, particularly due to excessive inference latency caused by autoregressive decoding. For lossless LLM decoding acceleration, Speculative Decoding (SD) has emerged as a promising solution. However, applying SD to generative recommendation presents unique challenges due to the requirement of generating top-K items (i.e., K distinct token sequences) as a recommendation list by beam search. This leads to more stringent verification in SD, where all the top-K sequences from the target LLM must be successfully drafted by the draft model at each decoding step. To alleviate this, we consider 1) boosting top-K sequence alignment between the draft model and the target LLM, and 2) relaxing the verification strategy to reduce trivial LLM calls. To this end, we propose an alignment framework named AtSpeed, which presents the AtSpeed-S optimization objective for top-K alignment under the strict top-K verification. Moreover, we introduce a relaxed sampling verification strategy that allows high-probability non-top-K drafted sequences to be accepted, significantly reducing LLM calls. Correspondingly, we propose AtSpeed-R for top-K alignment under this relaxed sampling verification. Empirical results on two real-world datasets demonstrate that AtSpeed significantly accelerates LLM-based generative recommendation, e.g., nearly 2x speedup under strict top-K verification and up to 2.5x speedup under relaxed sampling verification. The codes and datasets are released at https://github.com/Linxyhaha/AtSpeed.
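The strict top-K verification the abstract describes can be sketched in a few lines. This is a minimal illustration under assumptions, not the paper's implementation: the toy score dictionaries and the function names `top_k` and `strict_verify` are invented for clarity.

```python
# Illustrative sketch of strict top-K verification in speculative decoding
# for generative recommendation: the draft model proposes candidate beam
# sequences, and a drafting step is accepted only if the target LLM's own
# top-K beams are all among the drafted ones.

def top_k(scores, k):
    """Return the k highest-scoring candidate sequences from a {seq: score} dict."""
    return set(sorted(scores, key=scores.get, reverse=True)[:k])

def strict_verify(draft_scores, target_scores, k):
    """Accept only if every target top-k sequence was drafted (set inclusion)."""
    return top_k(target_scores, k) <= top_k(draft_scores, k)

# Toy example: draft and target agree on the best 3 of 4 candidates,
# so the strict check passes.
draft  = {"itemA": 0.9, "itemB": 0.7, "itemC": 0.6, "itemD": 0.1}
target = {"itemA": 0.8, "itemB": 0.75, "itemC": 0.5, "itemD": 0.2}
print(strict_verify(draft, target, k=3))  # True
```

Because all K target beams must be covered at every step, a single missed sequence forces a fallback to the target LLM, which is why AtSpeed-S optimizes top-K alignment between the draft and target models.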
Problem

Research questions and friction points this paper is trying to address.

High inference latency and deployment cost from autoregressive decoding in LLM-based generative recommendation.
Stringent verification when applying Speculative Decoding to top-K sequence generation via beam search.
Excessive target-LLM calls when drafted sequences fail strict top-K verification.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Speculative Decoding acceleration
AtSpeed alignment framework
Relaxed sampling verification
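The relaxed sampling verification listed above can be sketched by analogy to the acceptance test of standard speculative sampling, where a drafted sequence may be accepted even if it is not in the target's top-K, provided the target assigns it high enough probability. This is a hedged analogy, not AtSpeed's exact rule; `relaxed_accept` and the probabilities below are assumptions for illustration.

```python
import random

# Hedged sketch: accept a drafted sequence with probability
# min(1, p_target / p_draft), as in standard speculative sampling.
# High-probability non-top-K drafts can therefore still be accepted,
# reducing the number of target-LLM calls.

def relaxed_accept(p_draft, p_target, rng=random.random):
    """Accept the draft with probability min(1, p_target / p_draft)."""
    return rng() < min(1.0, p_target / p_draft)

# A draft that the target scores at least as highly is always accepted:
print(relaxed_accept(p_draft=0.2, p_target=0.3))  # True
```

When the target probability is lower than the draft's, acceptance becomes probabilistic, so occasional draft misses no longer force a target-LLM call at every step; AtSpeed-R aligns the draft model for this relaxed regime.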
Xinyu Lin
National University of Singapore
recommendation
Chaoqun Yang
Tsinghua University
Wenjie Wang
National University of Singapore
Yongqi Li
The Hong Kong Polytechnic University
Cunxiao Du
Research Scientist at Sea AI Lab
NLP, LLM Inference
Fuli Feng
University of Science and Technology of China
See-Kiong Ng
School of Computing and Institute of Data Science, National University of Singapore
artificial intelligence, natural language processing, data mining, smart cities, bioinformatics
Tat-Seng Chua
National University of Singapore