Greedy Information Projection for LLM Data Selection

📅 2026-03-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of efficiently selecting high-quality and diverse subsets for large language model fine-tuning to reduce computational costs while preserving performance. The authors propose a data selection framework based on mutual information maximization, which uniquely formulates the problem as maximizing the projection of a query embedding matrix onto the subspace spanned by the selected data, thereby unifying quality and diversity objectives. Leveraging a closed-form mutual information objective and an efficient greedy matching pursuit algorithm, the method enables scalable optimization. Experimental results demonstrate that fine-tuning on only a small subset of data selected by this approach achieves performance comparable to full-data fine-tuning on instruction-following and mathematical reasoning tasks, yielding substantial computational savings.

📝 Abstract
We present Greedy Information Projection (GIP), a principled framework for choosing training examples for large language model fine-tuning. GIP casts selection as maximizing mutual information between a subset of examples and task-specific query signals, which may originate from LLM quality judgments, metadata, or other sources. The framework involves optimizing a closed-form mutual information objective defined using both data and query embeddings, naturally balancing *quality* and *diversity*. Optimizing this score is equivalent to maximizing the projection of the query embedding matrix onto the span of the selected data, which provides a geometric explanation for the co-emergence of quality and diversity. Building on this view, we employ a fast greedy matching-pursuit procedure with efficient projection-based updates. On instruction-following and mathematical reasoning datasets, GIP selects small subsets that match full-data fine-tuning while using only a fraction of examples and compute, unifying quality-aware and diversity-aware selection for efficient fine-tuning.
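The greedy matching-pursuit procedure the abstract describes can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the function name `gip_select`, the Gram-Schmidt orthogonalization, and the residual-energy scoring are all assumptions about how a projection-maximizing greedy loop might look; the paper's closed-form objective may differ in its exact scoring rule.

```python
import numpy as np

def gip_select(X, Q, k):
    """Hypothetical sketch of greedy projection-based selection.

    X: (n, d) candidate data embeddings.
    Q: (m, d) query embeddings (e.g., from LLM quality judgments).
    Greedily picks k rows of X so that the projection of Q onto the
    span of the selected rows is large (orthogonal-matching-pursuit style).
    """
    X = X / np.linalg.norm(X, axis=1, keepdims=True)  # unit-norm candidates
    R = Q.astype(float).copy()     # residual of Q not yet explained
    selected, basis = [], []
    for _ in range(k):
        # Score each candidate by how much residual query energy it captures.
        scores = np.linalg.norm(R @ X.T, axis=0)
        if selected:
            scores[selected] = -np.inf  # never reselect
        i = int(np.argmax(scores))
        selected.append(i)
        # Orthogonalize the new direction against the current basis.
        v = X[i].copy()
        for b in basis:
            v -= (v @ b) * b
        nv = np.linalg.norm(v)
        if nv < 1e-10:
            continue  # candidate already in the span; adds no projection
        v /= nv
        basis.append(v)
        # Efficient projection update: remove the component of Q along v.
        R -= np.outer(R @ v, v)
    return selected
```

Under this sketch, each iteration costs one matrix product plus a rank-one residual update, which is what makes the greedy loop scale to large candidate pools.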
Problem

Research questions and friction points this paper is trying to address.

data selection
large language models
fine-tuning
mutual information
efficient training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Greedy Information Projection
mutual information
data selection
large language model fine-tuning
matching pursuit
Victor Ye Dong, Microsoft
Kuan-Yun Lee, Microsoft
Jiamei Shuai, Microsoft
Shengfei Liu, Microsoft
Yi Liu, Microsoft
Jian Jiao, Microsoft
Machine Learning · Natural Language Processing