Enhancing Input-Label Mapping in In-Context Learning with Contrastive Decoding

📅 2025-02-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) often neglect input-label mappings in in-context learning (ICL) examples, relying excessively on pre-trained priors. To address this, we propose In-Context Contrastive Decoding (ICCD), a training-free method that emphasizes input-label mapping by contrasting, at the logit level, the output distributions induced by positive and negative in-context examples, thereby increasing the model's sensitivity to the demonstrations. ICCD works with mainstream demonstration selection strategies and is compatible with LLMs across diverse scales. Evaluated on seven natural language understanding (NLU) tasks and six model scales, ICCD brings consistent improvements of up to +2.1 points on average. The code and scripts will be publicly released.

📝 Abstract
Large language models (LLMs) excel at a range of tasks through in-context learning (ICL), where only a few task examples guide their predictions. However, prior research highlights that LLMs often overlook input-label mapping information in ICL, relying more on their pre-trained knowledge. To address this issue, we introduce In-Context Contrastive Decoding (ICCD), a novel method that emphasizes input-label mapping by contrasting the output distributions between positive and negative in-context examples. Experiments on 7 natural language understanding (NLU) tasks show that our ICCD method brings consistent and significant improvement (up to +2.1 points on average) across 6 different scales of LLMs without requiring additional training. Our approach is versatile, enhancing performance with various demonstration selection methods, demonstrating its broad applicability and effectiveness. The code and scripts will be publicly released.
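To make the core idea concrete, here is a minimal sketch of decoding-time logit contrast between a positive-context pass and a negative-context pass. The exact combination rule and hyperparameters of ICCD are not given in this summary; the `(1 + alpha) * pos - alpha * neg` form below follows the standard contrastive-decoding formulation and is an assumption, as are the function names and the toy logit values.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D logit vector.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def contrastive_decode(logits_pos, logits_neg, alpha=0.5):
    """Combine logits from a positive-example context and a
    negative-example context, amplifying what the positive
    context predicts beyond the negative one.

    Note: this is a generic contrastive-decoding sketch, not
    the authors' exact ICCD formulation.
    """
    pos = np.asarray(logits_pos, dtype=float)
    neg = np.asarray(logits_neg, dtype=float)
    combined = (1 + alpha) * pos - alpha * neg
    return softmax(combined)

# Toy example: logits over a 4-label vocabulary from two
# forward passes (values are illustrative only).
probs = contrastive_decode([2.0, 1.0, 0.5, 0.1],
                           [1.5, 1.2, 0.4, 0.2])
predicted_label = int(np.argmax(probs))
```

In a real setup, `logits_pos` and `logits_neg` would come from two forward passes of the same LLM, one conditioned on well-formed demonstrations and one on demonstrations with perturbed input-label mappings.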
Problem

Research questions and friction points this paper is trying to address.

LLMs often overlook input-label mapping information in ICL
Over-reliance on pre-trained knowledge rather than the demonstrations
Improving ICL performance without additional training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Contrastive Decoding
Input-Label Mapping
No Additional Training