EFPC: Towards Efficient and Flexible Prompt Compression

📅 2025-03-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high computational overhead and inference cost induced by long prompts in large language models (LLMs), this paper proposes a unified dual-path prompt compression framework that jointly supports both task-aware and task-agnostic compression. The method leverages GPT-4 to generate high-quality compressed prompts and introduces a probability-driven dynamic fusion mechanism that adaptively decides, during both training and inference, whether to prepend the user instruction to the prompt being compressed. It further employs hybrid prompt training and lightweight fine-tuning, requiring only minimal additional data for substantial performance gains. On the LongBench single-document QA benchmark, the approach achieves a 4× compression ratio while improving F1 scores by 4.8% relative (with +1% data) and 11.4% (with +10% data) over LLMLingua-2. The framework demonstrates high efficiency, strong generalization, and robustness across diverse LLMs and downstream tasks.

📝 Abstract
The emergence of large language models (LLMs) like GPT-4 has revolutionized natural language processing (NLP), enabling diverse, complex tasks. However, extensive token counts lead to high computational and financial burdens. To address this, we propose Efficient and Flexible Prompt Compression (EFPC), a novel method unifying task-aware and task-agnostic compression for a favorable accuracy-efficiency trade-off. EFPC uses GPT-4 to generate compressed prompts and integrates them with original prompts for training. During training and inference, we selectively prepend user instructions and compress prompts based on predicted probabilities. EFPC is highly data-efficient, achieving significant performance with minimal data. Compared to the state-of-the-art method LLMLingua-2, EFPC achieves a 4.8% relative improvement in F1-score with 1% additional data at a 4x compression rate, and an 11.4% gain with 10% additional data on the LongBench single-doc QA benchmark. EFPC's unified framework supports broad applicability and enhances performance across various models, tasks, and domains, offering a practical advancement in NLP.
Problem

Research questions and friction points this paper is trying to address.

Reduces computational and financial burdens in NLP tasks
Unifies task-aware and task-agnostic prompt compression methods
Enhances performance with minimal data across various models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unifies task-aware and task-agnostic prompt compression
Uses GPT-4 for generating compressed prompts
Selectively prepends instructions based on probabilities
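The probability-driven selective prepending described above can be sketched roughly as follows. This is an illustrative sketch only, not the paper's implementation: the `keep_prob` scorer, the `tau` threshold, and the top-k selection rule are all assumptions standing in for EFPC's trained token classifier (fine-tuned on GPT-4-generated compressions).

```python
def compress_prompt(tokens, keep_prob, ratio=0.25, instruction=None, tau=0.5):
    """Keep the highest-probability tokens to reach the target ratio.

    `keep_prob(token, context)` is a hypothetical scorer returning the
    probability that a token should survive compression; in EFPC this role
    is played by a learned classifier. In task-aware mode an instruction is
    supplied, and it is prepended to the scoring context only when the mean
    keep probability under it exceeds `tau` -- a stand-in for the paper's
    probability-driven selective prepending.
    """
    context = tokens
    if instruction is not None:
        probs_with = [keep_prob(t, [instruction] + tokens) for t in tokens]
        if sum(probs_with) / len(tokens) > tau:
            context = [instruction] + tokens
    probs = [keep_prob(t, context) for t in tokens]
    k = max(1, int(len(tokens) * ratio))  # e.g. ratio=0.25 gives 4x compression
    keep = sorted(range(len(tokens)), key=lambda i: -probs[i])[:k]
    return [tokens[i] for i in sorted(keep)]  # preserve original token order
```

With a toy scorer that favors longer (more content-bearing) tokens, a 4× compression of an eight-token prompt keeps the two highest-scoring tokens in their original order.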
Yun-Hao Cao
Nanjing University
machine learning · computer vision
Yangsong Wang
Huawei Technologies
Shuzheng Hao
Huawei Technologies
Zhenxing Li
Huawei Technologies
Chengjun Zhan
Huawei Technologies
Sichao Liu
Huawei Technologies
Yi-Qi Hu
Huawei Technologies