🤖 AI Summary
This work addresses the challenge of efficiently selecting high-quality and diverse subsets for large language model fine-tuning, reducing computational cost while preserving performance. The authors propose a data selection framework based on mutual information maximization, which formulates the problem as maximizing the projection of a query embedding matrix onto the subspace spanned by the selected data, thereby unifying quality and diversity objectives. Leveraging a closed-form mutual information objective and an efficient greedy matching-pursuit algorithm, the method scales to large candidate pools. Experimental results demonstrate that fine-tuning on only a small subset of data selected by this approach matches full-data fine-tuning on instruction-following and mathematical reasoning tasks, yielding substantial computational savings.
📝 Abstract
We present \emph{Greedy Information Projection} (\textsc{GIP}), a principled framework for choosing training examples for large language model fine-tuning. \textsc{GIP} casts selection as maximizing mutual information between a subset of examples and task-specific query signals, which may originate from LLM quality judgments, metadata, or other sources. The framework optimizes a closed-form mutual information objective defined over both data and query embeddings, naturally balancing \emph{quality} and \emph{diversity}. Maximizing this objective is equivalent to maximizing the projection of the query embedding matrix onto the span of the selected data, which provides a geometric explanation for the co-emergence of quality and diversity. Building on this view, we employ a fast greedy matching-pursuit procedure with efficient projection-based updates. On instruction-following and mathematical reasoning datasets, \textsc{GIP} selects small subsets that match full-data fine-tuning while using only a fraction of the examples and compute, unifying quality-aware and diversity-aware selection for efficient fine-tuning.
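The greedy matching-pursuit idea described above can be sketched in a few lines: repeatedly pick the candidate whose (orthogonalized) direction captures the most query energy, then deflate all remaining candidates against the chosen direction so each step measures only the incremental gain in projection. This is a minimal illustrative sketch with plain NumPy arrays, not the paper's actual implementation; the function name, the deflation update, and the stopping rule are assumptions.

```python
import numpy as np

def greedy_projection_select(X, Q, k, eps=1e-10):
    """Greedily pick k rows of X (candidate data embeddings, shape (n, d))
    to maximize the squared Frobenius norm of the projection of
    Q (query embeddings, shape (m, d)) onto the span of the selected rows.

    Illustrative sketch of a matching-pursuit-style selector; not the
    paper's exact algorithm.
    """
    # R holds each candidate's residual direction, i.e. the component
    # orthogonal to the span of the rows selected so far.
    R = np.asarray(X, dtype=float).copy()
    Q = np.asarray(Q, dtype=float)
    selected = []
    for _ in range(k):
        norms = np.linalg.norm(R, axis=1)
        # Incremental gain of candidate i: ||Q r_i||^2 / ||r_i||^2,
        # the query energy captured by its unit residual direction.
        energy = np.sum((Q @ R.T) ** 2, axis=0)
        gains = np.where(norms > eps, energy / np.maximum(norms**2, eps), -np.inf)
        gains[selected] = -np.inf          # never re-select an example
        i = int(np.argmax(gains))
        if not np.isfinite(gains[i]) or gains[i] <= 0:
            break                          # no candidate adds projection mass
        selected.append(i)
        u = R[i] / norms[i]                # new orthonormal basis direction
        R -= np.outer(R @ u, u)            # deflate residuals (pursuit update)
    return selected
```

Because duplicated or near-collinear candidates are deflated to (near-)zero residuals after one of them is chosen, the selection is automatically diversity-seeking, while the query-energy numerator keeps it quality-seeking: the geometric unification the abstract refers to.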