🤖 AI Summary
To address catastrophic forgetting in supervised fine-tuning (SFT) of open-source large language models (LLMs), where the original training data is unavailable, this paper proposes a lightweight optimization method that requires no access to the original SFT dataset. The method comprises three key components: (1) an instruction distribution reconstruction mechanism that synthesizes semantically and difficulty-balanced instruction data from publicly available corpora; (2) a multi-model collaborative filtering strategy that selects high-quality samples based on consensus scoring; and (3) hybrid-data fine-tuning that jointly preserves generalization capabilities and enhances downstream task performance. Experiments across multiple benchmarks demonstrate that the approach significantly mitigates forgetting: it maintains the LLMs' broad generalization while achieving an average 3.2% improvement on downstream tasks. Moreover, it incurs substantially lower computational overhead than conventional replay- or regularization-based methods.
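The consensus-scoring step (component 2) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the scorer functions stand in for separate LLM judges, and the threshold value is an assumption.

```python
from statistics import mean

def consensus_filter(samples, scorers, threshold=0.5):
    """Keep samples whose mean quality score across all scorers clears
    the threshold, i.e. samples the judge models agree are high quality."""
    return [s for s in samples if mean(f(s) for f in scorers) >= threshold]

# Toy usage: two stub scorers standing in for separate LLM judges
# (hypothetical heuristics, purely for illustration).
scorers = [
    lambda s: 0.9 if len(s) > 6 else 0.2,  # crude length heuristic
    lambda s: 0.8 if " " in s else 0.1,    # crude well-formedness heuristic
]
candidates = ["explain recursion", "asdf", "summarize this article"]
kept = consensus_filter(candidates, scorers)
# kept == ["explain recursion", "summarize this article"]
```

Averaging scores and thresholding is one simple consensus rule; majority voting or minimum-score gating would slot into the same interface.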
📝 Abstract
Supervised Fine-Tuning (SFT), while enhancing the instruction-following capabilities and domain-specific task adaptability of large language models (LLMs), often diminishes their general capabilities. Moreover, because the original pre-training data is inaccessible, catastrophic forgetting tends to be exacerbated when third-party practitioners apply SFT to open-source models. To address this challenge, we propose a novel, more cost-effective SFT method that effectively reduces the risk of catastrophic forgetting without access to the original SFT data. Our approach begins by reconstructing the likely SFT instruction distribution of the base model, followed by a multi-model screening process to select optimal data, which is then mixed with new data for SFT. Experimental results demonstrate that our method preserves generalization capabilities in general domains while improving task-specific performance.
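The final mixing step, where screened replay-like instructions are blended with new task data before SFT, could look roughly like the sketch below. The 30% replay fraction and function names are assumptions for illustration; the paper does not fix a ratio here.

```python
import random

def mix_for_sft(replay_like, task_data, replay_fraction=0.3, seed=0):
    """Return a shuffled training set in which roughly `replay_fraction`
    of examples come from the reconstructed replay-like pool."""
    # Solve n / (n + len(task_data)) ≈ replay_fraction for n.
    n_replay = round(len(task_data) * replay_fraction / (1 - replay_fraction))
    rng = random.Random(seed)
    mixed = rng.sample(replay_like, min(n_replay, len(replay_like))) + list(task_data)
    rng.shuffle(mixed)
    return mixed

# Toy usage: 10 reconstructed general instructions, 7 new domain examples.
replay = [f"general-{i}" for i in range(10)]
task = [f"domain-{i}" for i in range(7)]
train_set = mix_for_sft(replay, task)
# 3 replay-like + 7 task examples, shuffled together
```

Keeping the replay fraction modest is the usual trade-off: enough general-domain data to anchor the base distribution, while the new task data still dominates the gradient signal.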