🤖 AI Summary
Small language models (<8B) suffer from catastrophic forgetting during knowledge distillation due to (i) misalignment between the training data and the model’s intrinsic capabilities, and (ii) the absence, in conventional training objectives, of explicit constraints that preserve previously acquired knowledge.
Method: We propose a dual-path solution: (1) a novel 5K-sample multitask reasoning dataset, the first to incorporate metacognitive knowledge annotations, paired with a task-capability matching data selection mechanism (sketched below); and (2) GDPO (Group Direct Preference Optimization), a preference-based optimization framework that leverages a reference model to implicitly guide parameter updates, jointly optimizing knowledge retention and transfer.
Results: Experiments demonstrate significant mitigation of catastrophic forgetting; under resource-constrained settings, our method closely matches the performance of GRPO while substantially enhancing both reasoning capability and knowledge retention in small models.
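The task-capability matching step is only described at a high level here, so the following is a minimal sketch of one plausible form: it assumes each sample carries annotations of the metacognitive skills it requires, and that we can probe the student model's accuracy on each skill. All names (select_matched_samples, skill_accuracy, threshold) are hypothetical, not from the paper.

```python
# Hypothetical sketch of task-capability matching (names are illustrative,
# not from the paper). Keep a training sample only when the student model
# already shows competence on the metacognitive skills the sample requires.
from typing import Callable

def select_matched_samples(
    dataset: list[dict],                     # each item: {"question": ..., "skills": [...], ...}
    skill_accuracy: Callable[[str], float],  # probe of the student's accuracy on one skill
    threshold: float = 0.5,                  # assumed cutoff for "possesses this skill"
) -> list[dict]:
    """Filter for samples whose annotated skills the student already has."""
    return [
        sample for sample in dataset
        if all(skill_accuracy(skill) >= threshold for skill in sample["skills"])
    ]
```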
📝 Abstract
Large Language Models demonstrate strong reasoning capabilities, which can be effectively distilled into smaller models. However, existing datasets and fine-tuning approaches still face challenges that lead to catastrophic forgetting, particularly for models smaller than 8B. First, most datasets ignore the relationship between the knowledge in the training data and the model's inherent abilities, making it difficult to preserve prior knowledge. Second, conventional training objectives often fail to explicitly constrain the preservation of inherent knowledge, which can result in forgetting of previously learned skills. To address these issues, we propose a comprehensive solution that alleviates catastrophic forgetting from both the data and the fine-tuning perspectives. On the data side, we construct a dataset of 5K instances that covers multiple reasoning tasks and incorporates metacognitive knowledge, making it better suited and more effective for distillation into smaller models. We annotate the metacognitive knowledge required to solve each question and filter the data by matching task knowledge against the model's inherent skills. On the training side, we introduce GDPO (Group Direct Preference Optimization), which is better suited to resource-limited scenarios and can efficiently approximate the performance of GRPO. Guided by the large model, and with the optimization path implicitly constrained by a reference model, GDPO enables more effective knowledge transfer while limiting excessive parameter drift. Extensive experiments demonstrate that our approach significantly alleviates catastrophic forgetting and improves reasoning performance in smaller models.
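The abstract does not spell out GDPO's objective. Below is a minimal sketch of one plausible form, assuming a DPO-style pairwise loss applied within a group of responses ranked by the large (teacher) model, with a frozen reference model supplying the implicit constraint on parameter drift. All function and parameter names (gdpo_group_loss, beta, etc.) are hypothetical, not the paper's actual formulation.

```python
# Hypothetical sketch of a group-wise, DPO-style objective (not the paper's
# exact method). A frozen reference model anchors the policy so updates
# cannot drift arbitrarily far from previously learned behavior.
import torch
import torch.nn.functional as F

def gdpo_group_loss(
    policy_logps: torch.Tensor,  # (K,) student log-probs for K sampled responses to one prompt
    ref_logps: torch.Tensor,     # (K,) the same responses scored by the frozen reference model
    rewards: torch.Tensor,       # (K,) teacher-assigned quality scores used to rank the group
    beta: float = 0.1,           # assumed strength of the implicit KL-like constraint
) -> torch.Tensor:
    """Average DPO loss over all (preferred, dispreferred) pairs in one group."""
    # Log-ratio against the reference model; large deviations are penalized,
    # which is what limits excessive parameter drift.
    margins = policy_logps - ref_logps                                  # (K,)
    # better[i, j] is True when response i outranks response j.
    better = rewards.unsqueeze(1) > rewards.unsqueeze(0)                # (K, K)
    if not better.any():
        return policy_logps.new_zeros(())  # no informative pairs in this group
    # Standard DPO objective per pair: -log sigmoid(beta * (margin_w - margin_l)).
    pair_logits = beta * (margins.unsqueeze(1) - margins.unsqueeze(0))  # (K, K)
    return -F.logsigmoid(pair_logits)[better].mean()
```

One design observation under these assumptions: an objective of this shape needs only pre-sampled responses and a frozen reference model rather than fresh on-policy rollouts at every update, which is consistent with the abstract's claim that GDPO suits resource-limited settings while approximating GRPO.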