TDR: Task-Decoupled Retrieval with Fine-Grained LLM Feedback for In-Context Learning

📅 2025-07-24
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In-context learning (ICL), cross-task example retrieval faces two key challenges: (1) inter-task data distribution entanglement, and (2) misalignment between retrieved examples and large language model (LLM) feedback at a fine-grained level. To address these, we propose Task-Decoupled Retrieval (TDR), the first framework to explicitly decouple cross-task data distributions and establish an LLM-based fine-grained feedback mechanism for retriever training. TDR supports plug-and-play deployment and multi-model adaptation. It integrates task-aware retrieval strategies with fine-grained feedback modeling, achieving significant ICL performance gains across 30 diverse NLP tasks. Empirical results show consistent improvements in average accuracy over state-of-the-art methods, with strong generalization across unseen tasks and models—establishing new SOTA performance.

Technology Category

Application Category

📝 Abstract
In-context learning (ICL) has become a classic approach for enabling LLMs to handle various tasks based on a few input-output examples. The effectiveness of ICL heavily relies on the quality of these examples, and previous works which focused on enhancing example retrieval capabilities have achieved impressive performances. However, two challenges remain in retrieving high-quality examples: (1) Difficulty in distinguishing cross-task data distributions, (2) Difficulty in making the fine-grained connection between retriever output and feedback from LLMs. In this paper, we propose a novel framework called TDR. TDR decouples the ICL examples from different tasks, which enables the retrieval module to retrieve examples specific to the target task within a multi-task dataset. Furthermore, TDR models fine-grained feedback from LLMs to supervise and guide the training of the retrieval module, which helps to retrieve high-quality examples. We conducted extensive experiments on a suite of 30 NLP tasks, the results demonstrate that TDR consistently improved results across all datasets and achieves state-of-the-art performance. Meanwhile, our approach is a plug-and-play method, which can be easily combined with various LLMs to improve example retrieval abilities for ICL. The code is available at https://github.com/Nnn-s/TDR.
Problem

Research questions and friction points this paper is trying to address.

Distinguishing cross-task data distributions in retrieval
Linking retriever output to fine-grained LLM feedback
Enhancing example quality for in-context learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decouples ICL examples from different tasks
Models fine-grained feedback from LLMs
Plug-and-play method for various LLMs
🔎 Similar Papers
No similar papers found.
Y
Yifu Chen
Meituan
B
Bingchen Huang
Meituan
Z
Zhiling Wang
Meituan
Y
Yuanchao Du
Meituan
J
Junfeng Luo
Meituan
L
Lei Shen
Meituan
Zhineng Chen
Zhineng Chen
Institute of Trustworthy Embodied AI, Fudan University
Computer VisionOCRMultimedia Analysis