🤖 AI Summary
To address the high computational overhead and inference cost induced by long prompts in large language models (LLMs), this paper proposes Efficient and Flexible Prompt Compression (EFPC), a unified dual-path framework that jointly supports task-aware and task-agnostic compression. The method leverages GPT-4 to generate high-quality compressed prompts and introduces a probability-driven dynamic fusion mechanism that adaptively decides, during both training and inference, whether to prepend the original user instruction to the compressed prompt. It further employs hybrid prompt training and lightweight fine-tuning, requiring only minimal additional data for substantial performance gains. On the LongBench single-document QA benchmark, EFPC achieves a 4x compression ratio while delivering relative F1-score improvements over LLMLingua-2 of 4.8% with 1% additional data and 11.4% with 10% additional data. The framework demonstrates high efficiency, strong generalization, and robustness across diverse LLMs and downstream tasks.
📝 Abstract
The emergence of large language models (LLMs) like GPT-4 has revolutionized natural language processing (NLP), enabling diverse, complex tasks. However, long prompts with extensive token counts incur high computational and financial costs. To address this, we propose Efficient and Flexible Prompt Compression (EFPC), a novel method unifying task-aware and task-agnostic compression for a favorable accuracy-efficiency trade-off. EFPC uses GPT-4 to generate compressed prompts and integrates them with the original prompts for training. During both training and inference, user instructions are selectively prepended to compressed prompts based on predicted probabilities. EFPC is highly data-efficient, achieving strong performance with minimal additional data. Compared with the state-of-the-art method LLMLingua-2, EFPC achieves a 4.8% relative improvement in F1-score with 1% additional data at a 4x compression rate, and an 11.4% gain with 10% additional data, on the LongBench single-document QA benchmark. EFPC's unified framework supports broad applicability and enhances performance across various models, tasks, and domains, offering a practical advancement in NLP.
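The compress-then-selectively-prepend step described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the per-token keep probabilities are taken as given (in EFPC they would come from a trained token classifier), and the mean-probability threshold `tau` gating the instruction prepend is a hypothetical stand-in for the paper's probability-driven fusion.

```python
def compress_prompt(tokens, keep_probs, ratio=4.0, instruction=None, tau=0.5):
    """Probability-driven prompt compression (illustrative sketch).

    tokens      : the prompt split into tokens
    keep_probs  : per-token keep probability (assumed given here; in EFPC
                  these would be predicted by a trained classifier)
    ratio       : target compression ratio, e.g. 4.0 keeps ~25% of tokens
    instruction : optional user instruction (task-aware mode)
    tau         : assumed confidence threshold for prepending the instruction
    """
    # Keep the highest-probability tokens, then restore their original order
    # so the compressed prompt stays readable.
    n_keep = max(1, int(len(tokens) / ratio))
    ranked = sorted(range(len(tokens)), key=keep_probs.__getitem__, reverse=True)
    compressed = [tokens[i] for i in sorted(ranked[:n_keep])]

    # Task-aware path: selectively prepend the instruction based on the
    # predicted probabilities (the mean-probability gate is an assumption).
    if instruction is not None and sum(keep_probs) / len(keep_probs) >= tau:
        compressed = instruction.split() + compressed
    return compressed
```

With `ratio=4.0` roughly a quarter of the tokens survive, matching the 4x compression setting reported on LongBench; passing an `instruction` switches the same function into the task-aware path, which is what lets one framework serve both regimes.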