Token Prepending: A Training-Free Approach for Eliciting Better Sentence Embeddings from LLMs

📅 2024-12-16
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
Causal attention in decoder-only LLMs biases sentence-embedding extraction: early tokens cannot attend to subsequent content, causing information loss that propagates to the final decoded token. To address this, the authors propose Token Prepending (TP), a training-free, plug-and-play method that prepends the sentence embedding decoded by each Transformer layer to the input sequence of the next layer, letting early tokens indirectly capture full-sentence semantics via cross-layer feature re-injection. TP achieves this under causal masking without modifying model architecture or parameters, and it is compatible with prompt-based embedding methods and autoregressive LLMs (e.g., Llama, Qwen). Extensive evaluation on Semantic Textual Similarity (STS) benchmarks and downstream classification tasks demonstrates consistent, significant improvements over strong baselines, with negligible inference overhead.

📝 Abstract
Extracting sentence embeddings from large language models (LLMs) is a promising direction, as LLMs have demonstrated strong semantic understanding capabilities. Previous studies typically focus on prompt engineering to elicit sentence embeddings from LLMs by prompting the model to encode sentence information into the embedding of the last token. However, LLMs are mostly decoder-only models with causal attention, so earlier tokens in the sentence cannot attend to later tokens, resulting in biased encoding of sentence information and cascading effects on the final decoded token. To this end, we propose a novel Token Prepending (TP) technique that prepends each layer's decoded sentence embedding to the beginning of the sentence in the next layer's input, allowing earlier tokens to attend to the complete sentence information under the causal attention mechanism. The proposed TP technique is plug-and-play and training-free, which means it can be seamlessly integrated with various prompt-based sentence embedding methods and autoregressive LLMs. Extensive experiments on various Semantic Textual Similarity (STS) tasks and downstream classification tasks demonstrate that our proposed TP technique can significantly improve the performance of existing prompt-based sentence embedding methods across different LLMs, while incurring negligible additional inference cost.
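The mechanism described in the abstract can be illustrated with a toy sketch. Everything below is a hypothetical simplification, not the paper's implementation: the single-head attention stub stands in for a real transformer layer, and a zero-initialized placeholder slot at position 0 stands in for the prepended token that each layer overwrites with the previous layer's last-token (sentence) state.

```python
import numpy as np

def causal_self_attention(x):
    """Toy single-head causal self-attention (projection weights omitted)."""
    T, d = x.shape
    scores = x @ x.T / np.sqrt(d)
    # Causal mask: position i may only attend to positions <= i.
    mask = np.triu(np.ones((T, T), dtype=bool), k=1)
    scores[mask] = -np.inf
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ x

def forward_with_token_prepending(embeddings, num_layers=4):
    """Token Prepending sketch: after each toy layer, copy the last token's
    hidden state into the placeholder slot at position 0, so that in the
    next layer every real token can attend to full-sentence information
    despite the causal mask."""
    # Reserve a placeholder slot at the front (initialized to zeros).
    h = np.concatenate([np.zeros((1, embeddings.shape[1])), embeddings], axis=0)
    for _ in range(num_layers):
        h = causal_self_attention(h)  # toy stand-in for a transformer layer
        h[0] = h[-1]                  # prepend previous layer's sentence state
    return h[-1]                      # last-token state as sentence embedding

rng = np.random.default_rng(0)
sent = rng.normal(size=(5, 8))        # 5 tokens, hidden dim 8
emb = forward_with_token_prepending(sent)
print(emb.shape)                      # (8,)
```

Note the design point this makes concrete: under causal masking, a token prepended at position 0 is visible to every subsequent token, so re-injecting the previous layer's sentence-level state there is what lets early tokens "see" the whole sentence without any retraining.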
Problem

Research questions and friction points this paper is trying to address.

Biased sentence encoding in decoder-only LLMs due to causal attention
Need for training-free method to improve sentence embeddings
Enhancing semantic understanding without additional inference cost
Innovation

Methods, ideas, or system contributions that make the work stand out.

Token Prepending for better sentence embeddings
Plug-and-play, training-free integration with LLMs
Improves performance with negligible inference cost
Yuchen Fu
Nanjing University
Computer Vision, Multimodal Learning
Zifeng Cheng
State Key Laboratory for Novel Software Technology, Nanjing University, China
Zhiwei Jiang
Nanjing University
Natural Language Processing
Zhonghui Wang
State Key Laboratory for Novel Software Technology, Nanjing University, China
Yafeng Yin
State Key Laboratory for Novel Software Technology, Nanjing University, China
Zhengliang Li
State Key Laboratory for Novel Software Technology, Nanjing University, China
Qing Gu
Nanjing University