SFTMix: Elevating Language Model Instruction Tuning with Mixup Recipe

📅 2024-10-07
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address weak instruction-following capability in large language models (LLMs) under scarce high-quality supervised fine-tuning (SFT) data, and the overfitting induced by uneven confidence across the semantic representation space, this paper proposes SFTMix, a confidence-aware Mixup recipe tailored for instruction tuning. The approach uses training dynamics to identify high- and low-confidence examples, interpolates their representations in semantic space, and applies a Mixup-based regularization that encourages linear behavior between confidence regions, propagating supervision signals across them. Extensive experiments across multiple LLM families and SFT datasets of varying size and quality show that SFTMix consistently improves instruction following and healthcare-specific task performance. It is also computationally efficient, dataset-agnostic, and scalable, requiring no architectural modifications or additional inference overhead.

📝 Abstract
To acquire instruction-following capabilities, large language models (LLMs) undergo instruction tuning, where they are trained on instruction-response pairs using next-token prediction (NTP). Efforts to improve instruction tuning often focus on higher-quality supervised fine-tuning (SFT) datasets, typically requiring data filtering with proprietary LLMs or human annotation. In this paper, we take a different approach by proposing SFTMix, a novel Mixup-based recipe that elevates LLM instruction tuning beyond the conventional NTP paradigm, without relying on well-curated datasets. Observing that LLMs exhibit uneven confidence across the semantic representation space, we argue that examples with different confidence levels should play distinct roles in instruction tuning: confident data is prone to overfitting, while unconfident data is harder to generalize. Based on this insight, SFTMix leverages training dynamics to identify examples with varying confidence levels, interpolates them to bridge the confidence gap, and applies a Mixup-based regularization to support learning on these additional, interpolated examples. By propagating supervision signals across confidence regions and encouraging linear behavior between them, SFTMix mitigates overfitting in confident examples while enhancing generalization in unconfident ones. We demonstrate the effectiveness of SFTMix in both instruction-following and healthcare-specific SFT tasks, with consistent improvements across LLM families and SFT datasets of varying sizes and qualities. Extensive analyses across six directions highlight SFTMix's compatibility with data selection, adaptability to compute-constrained scenarios, and scalability to broader applications.
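The recipe described in the abstract (rank examples by confidence along the training trajectory, interpolate a confident/unconfident pair in semantic space, and weight the loss by the same mixing ratio) can be sketched in a few lines. Everything below (the names `confidence`, `interpolate`, and `mixup_loss`, and the toy numbers) is an illustrative assumption, not the paper's actual implementation.

```python
import math
import random

def confidence(token_probs):
    """Mean log-likelihood of the reference tokens: a simple confidence proxy."""
    return sum(math.log(p) for p in token_probs) / len(token_probs)

def interpolate(h_i, h_j, lam):
    """Linearly interpolate two semantic representations (toy vectors here)."""
    return [lam * a + (1 - lam) * b for a, b in zip(h_i, h_j)]

def mixup_loss(loss_i, loss_j, lam):
    """Mixup-style regularization: per-example losses share the mixing ratio."""
    return lam * loss_i + (1 - lam) * loss_j

# Toy data: per-token probabilities of the reference response for two examples.
examples = {
    "confident":   [0.9, 0.8, 0.95],
    "unconfident": [0.3, 0.4, 0.2],
}

# Rank examples by confidence, as if observed along the training trajectory.
ranked = sorted(examples, key=lambda k: confidence(examples[k]), reverse=True)

# Pair the most- and least-confident examples and interpolate with a
# Beta-distributed mixing ratio, as in standard Mixup.
lam = random.betavariate(0.5, 0.5)
h_mix = interpolate([1.0, 0.0], [0.0, 1.0], lam)
```

Training on `h_mix` with `mixup_loss` is what bridges the confidence gap: the interpolated example carries supervision from both regions, so neither the confident nor the unconfident example is fit in isolation.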
Problem

Research questions and friction points this paper is trying to address.

Instruction tuning for LLMs typically depends on costly, well-curated SFT data
Confident examples are prone to overfitting
Unconfident examples are harder to generalize
Innovation

Methods, ideas, or system contributions that make the work stand out.

Confidence-aware Mixup recipe for LLM instruction tuning
Interpolates examples of differing confidence to bridge the gap
Regularization mitigates overfitting and improves generalization
Yuxin Xiao (Massachusetts Institute of Technology)
Shujian Zhang (Zoom Video Communications)
Wenxuan Zhou (Zoom Video Communications)
Marzyeh Ghassemi (Massachusetts Institute of Technology)
Sanqiang Zhao (Amazon Alexa AI)

Machine Learning · Natural Language Processing · Deep Learning · Multimodal