An Empirical Study of Catastrophic Forgetting in Large Language Models During Continual Fine-tuning

πŸ“… 2023-08-17
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 203
✨ Influential: 10
πŸ“„ PDF
πŸ€– AI Summary
This study investigates catastrophic forgetting in large language models (LLMs) during continual instruction tuning, focusing on domain knowledge, mathematical reasoning (GSM8K), and reading comprehension. It examines the effects of model scale (1B–7B parameters) and architecture (decoder-only BLOOMZ vs. encoder-decoder mT0) using multi-task instruction datasets for continual fine-tuning, complemented by cross-domain evaluation, structured benchmarking, and bias quantification metrics. Key findings are: (1) catastrophic forgetting is pervasive and intensifies with increasing model size; (2) the decoder-only architecture exhibits greater resistance to forgetting; (3) continual instruction tuning empirically mitigates linguistic bias, a novel finding; and (4) general-purpose instruction tuning suppresses forgetting in subsequent tasks. The work establishes systematic relationships among model scale, architectural choice, and forgetting severity, providing empirical grounding for controllable continual learning in LLMs.
πŸ“ Abstract
Catastrophic forgetting (CF) is a phenomenon that occurs in machine learning when a model forgets previously learned information while acquiring new knowledge to achieve satisfactory performance on downstream tasks. As large language models (LLMs) have demonstrated remarkable performance, it is intriguing to investigate whether CF exists during the continual instruction tuning of LLMs. This study empirically evaluates the forgetting phenomenon in LLMs' knowledge during continual instruction tuning from the perspectives of domain knowledge, reasoning, and reading comprehension. The experiments reveal that catastrophic forgetting is generally observed in LLMs ranging from 1B to 7B parameters. Surprisingly, as the model scale increases, the severity of forgetting intensifies within this scale range, which may result from the stronger initial performance of the larger LLM. Comparing the decoder-only model BLOOMZ with the encoder-decoder model mT0, BLOOMZ exhibits less forgetting and retains more knowledge. Interestingly, we also observe that LLMs can mitigate language biases, such as gender bias, during continual fine-tuning. Furthermore, our findings indicate that general instruction tuning can help alleviate the forgetting phenomenon in LLMs during subsequent fine-tuning.
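The forgetting the abstract describes can be quantified by tracking task accuracy across sequential tuning stages. A minimal sketch of one common formulation (best earlier accuracy minus final accuracy per task) is shown below; the function name and all accuracy values are hypothetical and illustrative, not the paper's exact evaluation protocol:

```python
def forgetting_score(acc_matrix):
    """Illustrative forgetting metric, not the paper's exact protocol.

    acc_matrix[t][k] is accuracy on task k after finishing training
    stage t. Forgetting for task k is its best accuracy at any stage
    before the last, minus its accuracy after the final stage.
    """
    num_stages = len(acc_matrix)
    scores = {}
    for k in range(num_stages - 1):  # the final task cannot yet be forgotten
        best_earlier = max(acc_matrix[t][k] for t in range(k, num_stages - 1))
        scores[k] = best_earlier - acc_matrix[num_stages - 1][k]
    return scores

# Hypothetical accuracies over three sequential instruction-tuning stages
acc = [
    [0.62, 0.10, 0.05],  # after tuning on task 0
    [0.48, 0.70, 0.08],  # after tuning on task 1
    [0.41, 0.55, 0.66],  # after tuning on task 2
]
print(forgetting_score(acc))  # positive values indicate forgetting
```

A severity comparison across model scales, as in the study, would amount to computing this score per model and per evaluation domain.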
Problem

Research questions and friction points this paper is trying to address.

Catastrophic Forgetting
Continual Learning
Language Models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Catastrophic Forgetting
Continual Learning
General Instruction Tuning
πŸ”Ž Similar Papers
No similar papers found.
Yun Luo
Shanghai AI Lab
Natural Language Processing, Graph Neural Network
Zhen Yang
Pattern Recognition Center, WeChat AI, Tencent Inc, Beijing, China.
Fandong Meng
WeChat AI, Tencent
Machine Translation, Natural Language Processing
Yafu Li
The Chinese University of Hong Kong
Reasoning, Trustworthy AI, Multilinguality
Jie Zhou
Pattern Recognition Center, WeChat AI, Tencent Inc, Beijing, China.
Yue Zhang
School of Engineering, Westlake University, Hangzhou, 310024, P.R. China.; Institute of Advanced Technology, Westlake Institute for Advanced Study, Hangzhou, 310024, P.R. China.