🤖 AI Summary
This work addresses the rapid decay of information gain in conventional information-theoretic data selection methods for instruction tuning, which stems from gradient conflicts among samples and impedes effective retention of critical knowledge. To overcome this limitation, the authors propose SPICE, a novel approach that, for the first time, integrates explicit quantification of gradient conflict into an information-based selection framework. By maximizing Fisher information while penalizing conflicting gradients, and leveraging ε-decomposition theory to analyze the impact of conflict on submodularity, SPICE introduces a conflict-aware greedy selection strategy. The method further incorporates a proxy model for acceleration, early stopping, and submodular optimization. Evaluated across eight benchmarks, SPICE matches or surpasses full-data fine-tuning and six state-of-the-art baselines using only 10% of the data, substantially improving both data efficiency and model performance.
📝 Abstract
Information-based data selection for instruction tuning is compelling: maximizing the log-determinant of the Fisher information yields a monotone submodular objective, enabling greedy algorithms to achieve a $(1-1/e)$ approximation under a cardinality budget. In practice, however, we identify gradient conflict, i.e., misalignment between per-sample gradients, as a key factor: alleviating such conflicts slows the decay of marginal log-determinant information gains and thereby prevents significant loss of information. We formalize this via an $\varepsilon$-decomposition that quantifies the deviation from ideal submodularity as a function of conflict statistics, yielding data-dependent approximation factors that tighten as conflicts diminish. Guided by this analysis, we propose SPICE, a conflict-aware selector that maximizes information while penalizing misalignment, and that supports early stopping and proxy models for efficiency. Empirically, SPICE selects subsets with higher log-determinant information than the original criteria, and these informational gains translate into performance improvements: across 8 benchmarks with LLaMA2-7B and Qwen2-7B, SPICE uses only 10% of the data yet matches or exceeds 6 methods, including full-data tuning, achieving these gains at substantially lower training cost.
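To make the selection objective concrete, the following is a minimal sketch of a conflict-aware greedy selector in the spirit described above: each candidate's marginal score is the log-determinant of a ridge-regularized Gram matrix of per-sample gradients (a proxy for the Fisher-information objective), minus a penalty on negative cosine similarity with already-selected gradients. The penalty weight `lam`, the ridge term, and the exact penalty form are illustrative assumptions, not the paper's precise formulation.

```python
import numpy as np

def conflict_aware_greedy(grads, budget, lam=0.1, ridge=1e-3):
    """Greedily pick `budget` rows of `grads` (n x d per-sample gradients).

    Score = log-det of the ridge-regularized Gram matrix of the candidate
    subset (information term) minus lam * mean negative cosine similarity
    with already-selected gradients (conflict penalty). Illustrative only;
    `lam` and `ridge` are hypothetical knobs.
    """
    n, _ = grads.shape
    selected, remaining = [], list(range(n))
    for _ in range(budget):
        best_i, best_score = None, -np.inf
        for i in remaining:
            idx = selected + [i]
            G = grads[idx]
            # Information term: log-det of the (regularized) Gram matrix.
            _, logdet = np.linalg.slogdet(G @ G.T + ridge * np.eye(len(idx)))
            # Conflict penalty: only gradients pointing away from the
            # selected set (negative cosine) are penalized.
            conflict = 0.0
            if selected:
                g = grads[i] / (np.linalg.norm(grads[i]) + 1e-12)
                S = grads[selected]
                S = S / (np.linalg.norm(S, axis=1, keepdims=True) + 1e-12)
                conflict = np.mean(np.clip(-(S @ g), 0.0, None))
            score = logdet - lam * conflict
            if score > best_score:
                best_score, best_i = score, i
        selected.append(best_i)
        remaining.remove(best_i)
    return selected
```

With `lam=0`, this reduces to plain greedy log-det maximization; the penalty term is what discounts candidates whose gradients conflict with the running subset.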