Take the essence and discard the dross: A Rethinking on Data Selection for Fine-Tuning Large Language Models

📅 2024-06-20
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing data selection methods for fine-tuning large language models (LLMs) lack a unified framework and standardized evaluation, which hinders fair cross-method comparison. Method: The paper reviews roughly a dozen recent selection approaches under a three-stage scheme (feature extraction, criteria design, and selector evaluation) and proposes a unified comparison approach built on two metrics: ratio-based efficiency and ranking-based feasibility. Contribution/Results: The cross-method comparison shows that methods emphasizing more targeted quality measurement achieve higher efficiency at the cost of feasibility, revealing an intrinsic efficiency-feasibility trade-off; the paper closes by identifying four key open challenges for future research on fine-tuning data selection.
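As a rough illustration of the three-stage scheme the summary describes, here is a minimal Python sketch. The embedding function, the cosine-similarity criterion, and the selection ratio are all stand-ins for illustration, not the paper's actual implementations.

```python
import numpy as np

def extract_features(texts, embed):
    """Stage 1, feature extraction: map each candidate example to an
    embedding vector using a caller-supplied embed() function."""
    return np.stack([embed(t) for t in texts])

def score_by_similarity(features, target):
    """Stage 2, criteria design: score each example by cosine similarity
    to a target-domain centroid (one common quality criterion)."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    t = target / np.linalg.norm(target)
    return f @ t

def select_top_ratio(scores, ratio):
    """Stage 3, selection: keep the top `ratio` fraction by score."""
    k = max(1, int(len(scores) * ratio))
    return np.argsort(scores)[::-1][:k]

# Toy usage with a random stand-in embedding function.
rng = np.random.default_rng(0)
embed = lambda t: rng.standard_normal(8)
texts = [f"example {i}" for i in range(10)]
feats = extract_features(texts, embed)
target = feats[:3].mean(axis=0)  # pretend the first 3 are in-domain
chosen = select_top_ratio(score_by_similarity(feats, target), 0.3)
print(sorted(chosen.tolist()))
```

In practice, stage 1 would use a real sentence embedder and stage 2 could swap in loss- or uncertainty-based criteria; the pipeline shape stays the same.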

📝 Abstract
Data selection for fine-tuning large language models (LLMs) aims to choose a high-quality subset from existing datasets, allowing the trained model to outperform baselines trained on the full dataset. However, the expanding body of research lacks a clear, unified framework, and the variability in experimental settings complicates systematic comparisons. While existing surveys comprehensively overview the stages and methods of data selection, they often overlook an in-depth exploration of the fine-tuning phase. In this paper, we conduct a focused review of recent data selection techniques for fine-tuning LLMs, analyzing a dozen key studies. We introduce a novel three-stage scheme - comprising feature extraction, criteria design, and selector evaluation - to systematically categorize and evaluate these methods. Additionally, we propose a unified comparison approach that incorporates ratio-based efficiency and ranking-based feasibility metrics to address inconsistencies across experiments. Our findings reveal that methods emphasizing more targeted quality measurement achieve higher efficiency but at the cost of feasibility. Finally, we discuss trends and highlight four key challenges in fine-tuning data selection, offering potential directions for future research.
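The two comparison metrics named in the abstract can be sketched as follows. The concrete forms here (a subset-to-full performance ratio for efficiency, and Spearman rank agreement with an oracle quality ranking for feasibility) are assumptions for illustration; the paper's exact definitions may differ.

```python
import numpy as np

def ratio_efficiency(subset_score, full_score):
    """Ratio-based efficiency (assumed form): performance of a model
    trained on the selected subset relative to the full-data baseline."""
    return subset_score / full_score

def spearman(a, b):
    """Spearman rank correlation via Pearson on ranks (no ties assumed),
    as a stand-in for ranking-based feasibility."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float((ra @ rb) / np.sqrt((ra @ ra) * (rb @ rb)))

# Toy check: a selector whose scores track an oracle quality ranking.
oracle = np.array([0.9, 0.7, 0.5, 0.3, 0.1])
scores = np.array([0.8, 0.75, 0.4, 0.35, 0.05])
print(ratio_efficiency(0.71, 0.68))  # > 1.0: subset beats full data
print(spearman(scores, oracle))      # 1.0: rankings agree exactly
```

An efficiency above 1.0 means the selected subset outperforms training on everything, while feasibility measures how reliably the method orders examples by quality, which is the axis the abstract reports degrading as selection becomes more targeted.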
Problem

Research questions and friction points this paper is trying to address.

Develop a framework for data selection in LLM fine-tuning
Address inconsistencies in experimental settings and comparisons
Explore quality measurement trade-offs in data selection methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Three-stage scheme: feature extraction, criteria design, and selector evaluation
Unified comparison approach with ratio-based efficiency and ranking-based feasibility metrics
Efficiency-feasibility trade-off identified in targeted quality measurement
Ziche Liu
The School of Data Science, The Chinese University of Hong Kong, Shenzhen; Shenzhen Research Institute of Big Data
Rui Ke
The School of Data Science, The Chinese University of Hong Kong, Shenzhen; Shenzhen Research Institute of Big Data
Feng Jiang
The School of Data Science, The Chinese University of Hong Kong, Shenzhen; Shenzhen Research Institute of Big Data; University of Science and Technology of China
Haizhou Li
The Chinese University of Hong Kong, Shenzhen (CUHK-Shenzhen), China; NUS, Singapore
Automatic Speech Recognition, Speaker Recognition, Language Recognition, Voice Conversion, Machine Translation