Task-level Distributionally Robust Optimization for Large Language Model-based Dense Retrieval

📅 2024-08-20
🏛️ AAAI Conference on Artificial Intelligence
📈 Citations: 2
Influential: 0
🤖 AI Summary
To address the weak generalization capability and strong data dependency of Large Language Model-based Dense Retrieval (LLM-DR) across multi-source heterogeneous tasks, this paper proposes task-level Distributionally Robust Optimization (tDRO). It is the first work to introduce distributionally robust optimization into LLM-DR fine-tuning, featuring a learnable, task-level weight parameterization coupled with a joint update strategy that scales domain-wise gradients, explicitly modeling cross-domain generalization. The method dynamically reweights task-specific data distributions in an end-to-end manner, enhancing robustness in cross-domain retrieval. Evaluated on mainstream large-scale retrieval benchmarks, tDRO consistently improves retrieval performance; at comparable effectiveness, it reduces training data usage by up to 30%. Moreover, tDRO is model-agnostic and compatible with LLM-DR architectures of varying scales.

📝 Abstract
Large Language Model-based Dense Retrieval (LLM-DR) optimizes over numerous heterogeneous fine-tuning collections from different domains. However, discussion of its training data distribution is still minimal. Previous studies rely on empirically assigned dataset choices or sampling ratios, which inevitably lead to sub-optimal retrieval performance. In this paper, we propose a new task-level Distributionally Robust Optimization (tDRO) algorithm for LLM-DR fine-tuning, targeted at improving universal domain generalization by end-to-end reweighting the data distribution of each task. tDRO parameterizes the domain weights and updates them with scaled domain gradients. The optimized weights are then transferred to LLM-DR fine-tuning to train more robust retrievers. Experiments show consistent improvements on large-scale retrieval benchmarks and up to a 30% reduction in dataset usage after applying our optimization algorithm to a series of different-sized LLM-DR models.
Problem

Research questions and friction points this paper is trying to address.

Optimizing heterogeneous fine-tuning data for LLM-based dense retrieval
Addressing sub-optimal retrieval from empirical dataset choices
Improving domain generalization via task-level distributional optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Task-level Distributionally Robust Optimization algorithm
End-to-end reweighting of task data distribution
Parameterized domain weights with scaled gradients
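The reweighting idea behind these contributions can be sketched in a few lines. The snippet below is an illustrative approximation, not the paper's exact formulation: task weights live on the probability simplex and are updated by exponentiated-gradient ascent on scaled per-task losses, so harder (higher-loss) tasks receive more weight; the function name `tdro_reweight`, the learning rate, and the loss-scaling choice are all assumptions for the sketch.

```python
import numpy as np

def tdro_reweight(task_losses, weights, lr=0.1):
    """One DRO-style weight update (illustrative sketch, assumed form):
    exponentiated-gradient ascent shifts probability mass toward
    harder (higher-loss) tasks, then renormalizes onto the simplex."""
    # Scale losses so the update size is insensitive to their magnitude
    scaled = task_losses / (np.abs(task_losses).mean() + 1e-8)
    # Mirror-ascent step: multiplicative update with exponentiated gradient
    new_w = weights * np.exp(lr * scaled)
    # Project back onto the probability simplex
    return new_w / new_w.sum()

# Toy example: three heterogeneous retrieval tasks with different losses
weights = np.ones(3) / 3
for _ in range(5):
    losses = np.array([0.9, 0.5, 0.2])
    weights = tdro_reweight(losses, weights)
```

After a few updates the hardest task (index 0) carries the largest weight, which would then guide sampling ratios in the subsequent fine-tuning stage.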
Guangyuan Ma
Chinese Academy of Sciences
Information Retrieval
Yongliang Ma
Langboat Technology
LLM, RAG, Information Retrieval, Natural Language Processing, Document Understanding
Xing Wu
Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China; School of Cyber Security, University of Chinese Academy of Sciences, Beijing, China
Zhenpeng Su
Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China; School of Cyber Security, University of Chinese Academy of Sciences, Beijing, China
Ming Zhou
Langboat Technology, Beijing, China
Songlin Hu
Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China; School of Cyber Security, University of Chinese Academy of Sciences, Beijing, China