🤖 AI Summary
To address the weak generalization and strong data dependency of Large Language Model-based Dense Retrieval (LLM-DR) across multi-source heterogeneous tasks, this paper proposes task-level Distributionally Robust Optimization (tDRO). It is the first work to introduce distributionally robust optimization into LLM-DR fine-tuning: the method parameterizes learnable task-level weights and jointly updates them with scaled domain-wise gradients, explicitly modeling cross-domain generalization. By dynamically reweighting task-specific data distributions end-to-end, it trains retrievers that are more robust in cross-domain retrieval. On mainstream large-scale retrieval benchmarks, tDRO consistently improves retrieval performance and, at comparable effectiveness, reduces training data requirements by up to 30%. Moreover, tDRO is model-agnostic and compatible with LLM-DR architectures of varying scales.
📝 Abstract
Large Language Model-based Dense Retrieval (LLM-DR) optimizes over numerous heterogeneous fine-tuning collections from different domains. However, its training data distribution has received little systematic study. Previous work relies on empirically assigned dataset choices or sampling ratios, which inevitably leads to sub-optimal retrieval performance. In this paper, we propose a new task-level Distributionally Robust Optimization (tDRO) algorithm for LLM-DR fine-tuning, aimed at improving universal domain generalization by end-to-end reweighting the data distribution of each task. tDRO parameterizes the domain weights and updates them with scaled domain gradients. The optimized weights are then transferred to LLM-DR fine-tuning to train more robust retrievers. Experiments on large-scale retrieval benchmarks show consistent improvements, and applying our optimization algorithm reduces dataset usage by up to 30% across a series of different-sized LLM-DR models.
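The core loop the abstract describes (parameterize domain weights, update them with scaled per-domain gradients, then reuse the optimized weights as sampling ratios) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the exponentiated-gradient-style update, the per-domain baseline used for loss scaling, and all numeric values are assumptions chosen for clarity.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

def tdro_weight_update(logits, domain_losses, baseline_losses, lr=0.1):
    """One ascent step on the learnable task-weight logits.

    Each domain's loss is divided by a per-domain baseline (a stand-in
    for the paper's gradient scaling), so domains where the retriever
    lags its baseline are upweighted in the worst-case objective.
    """
    scaled = np.asarray(domain_losses) / np.asarray(baseline_losses)
    logits = logits + lr * scaled  # raise logits of harder domains
    return logits, softmax(logits)

# Toy run with 3 domains and hypothetical, fixed losses.
logits = np.zeros(3)
for _ in range(5):
    logits, w = tdro_weight_update(logits, [0.9, 0.4, 0.6], [0.5, 0.5, 0.5])
# w now sums to 1 and can serve as sampling ratios for fine-tuning;
# domain 0, with the highest scaled loss, receives the largest weight.
print(w)
```

In a real setup the losses would come from held-out batches of each fine-tuning collection per step, and the resulting weights would set per-task sampling probabilities in the subsequent LLM-DR training run.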