Improved Supervised Fine-Tuning for Large Language Models to Mitigate Catastrophic Forgetting

📅 2025-06-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address catastrophic forgetting in supervised fine-tuning (SFT) of open-source large language models (LLMs), where original pretraining data is unavailable, this paper proposes a lightweight optimization method that requires no access to the original SFT dataset. The method comprises three key components: (1) an instruction distribution reconstruction mechanism that synthesizes semantically and difficulty-balanced instruction data from publicly available corpora; (2) a multi-model collaborative filtering strategy that selects high-quality samples based on consensus scoring; and (3) hybrid-data fine-tuning to jointly preserve generalization capabilities and enhance downstream task performance. Experiments across multiple benchmarks demonstrate that the approach significantly mitigates forgetting—maintaining LLMs’ broad generalization while achieving an average 3.2% improvement on downstream tasks. Moreover, it incurs substantially lower computational overhead compared to conventional replay- or regularization-based methods.
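The multi-model collaborative filtering step described above can be sketched as a consensus-scoring filter: each judge model scores a candidate instruction sample, and only samples whose average score clears a threshold are kept. This is a minimal illustration, not the paper's implementation; the judge functions, score scale, and threshold value are all assumptions.

```python
def consensus_filter(samples, score_fns, threshold=0.7):
    """Keep samples whose mean score across judge models clears the threshold.

    samples: list of dicts holding candidate instruction data.
    score_fns: one quality-scoring callable per judge model (stand-ins here;
    in practice these would wrap LLM-based quality raters).
    """
    selected = []
    for sample in samples:
        scores = [fn(sample) for fn in score_fns]
        if sum(scores) / len(scores) >= threshold:
            selected.append(sample)
    return selected


# Toy usage: two stand-in judges that read precomputed scores.
judges = [lambda s: s["score_a"], lambda s: s["score_b"]]
data = [
    {"text": "Explain supervised fine-tuning.", "score_a": 0.9, "score_b": 0.8},
    {"text": "???", "score_a": 0.2, "score_b": 0.3},
]
kept = consensus_filter(data, judges)  # only the first sample passes
```

Averaging is just one consensus rule; the paper's actual scoring aggregation may differ.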

📝 Abstract
Supervised Fine-Tuning (SFT), while enhancing large language models' (LLMs') instruction-following capabilities and domain-specific task adaptability, often diminishes their general capabilities. Moreover, because the original pre-training data is inaccessible, catastrophic forgetting tends to be exacerbated when third-party practitioners apply SFT to open-source models. To address this challenge, we propose a novel, more cost-effective SFT method that effectively reduces the risk of catastrophic forgetting without access to the original SFT data. Our approach begins by reconstructing the likely SFT instruction distribution of the base model, followed by a multi-model screening process to select optimal data, which is then mixed with new data for SFT. Experimental results demonstrate that our method preserves generalization capabilities in general domains while improving task-specific performance.
Problem

Research questions and friction points this paper is trying to address.

Mitigate catastrophic forgetting in supervised fine-tuning of LLMs
Enhance task-specific performance without losing general capabilities
Propose cost-effective SFT method without original training data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reconstructs base model SFT instruction distribution
Multi-model screening for optimal data selection
Mixes selected data with new data for SFT
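The final step in the bullets above, mixing selected general-instruction data with new task data, could look like the following sketch. The mixing ratio, seed, and function name are illustrative assumptions; the paper does not specify them here.

```python
import random


def mix_for_sft(reconstructed, new_task, general_ratio=0.3, seed=0):
    """Blend reconstructed general-instruction samples with new task samples.

    general_ratio is the target fraction of general samples in the final mix,
    a hyperparameter assumed for illustration. Shuffling avoids ordering bias
    during fine-tuning.
    """
    rng = random.Random(seed)
    # Number of general samples needed so they form general_ratio of the mix.
    n_general = round(len(new_task) * general_ratio / (1 - general_ratio))
    general = rng.sample(reconstructed, min(n_general, len(reconstructed)))
    mixed = general + list(new_task)
    rng.shuffle(mixed)
    return mixed


# Toy usage: 7 new task samples mixed with reconstructed general samples.
reconstructed = [f"general-{i}" for i in range(10)]
new_task = [f"task-{i}" for i in range(7)]
mixed = mix_for_sft(reconstructed, new_task)
```

A fixed ratio is the simplest policy; curriculum-style or difficulty-aware mixing would slot in at the same point.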