SEFE: Superficial and Essential Forgetting Eliminator for Multimodal Continual Instruction Tuning

📅 2025-05-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses catastrophic forgetting in Multimodal Continual Instruction Tuning (MCIT), introducing a dichotomous framework that distinguishes "superficial forgetting" (output-format drift) from "essential forgetting" (genuine knowledge loss yielding factually inaccurate answers). To mitigate both jointly, the authors (1) propose Answer Style Diversification (ASD), a paradigm that unifies output styles across tasks to eliminate superficial forgetting, and (2) design RegLoRA, a regularization-enhanced low-rank adaptation method that imposes parameter-level constraints on critical weights to suppress essential forgetting. Evaluated on a dedicated MCIT benchmark, the combined method significantly reduces both forms of forgetting and achieves state-of-the-art performance, jointly addressing output normativity and factual correctness in continual multimodal instruction tuning.

📝 Abstract
Multimodal Continual Instruction Tuning (MCIT) aims to enable Multimodal Large Language Models (MLLMs) to incrementally learn new tasks without catastrophic forgetting. In this paper, we explore forgetting in this context, categorizing it into superficial forgetting and essential forgetting. Superficial forgetting refers to cases where the model's knowledge may not be genuinely lost, but its responses to previous tasks deviate from expected formats due to the influence of subsequent tasks' answer styles, making the results unusable. By contrast, essential forgetting refers to situations where the model provides correctly formatted but factually inaccurate answers, indicating a true loss of knowledge. Assessing essential forgetting necessitates addressing superficial forgetting first, as severe superficial forgetting can obscure the model's knowledge state. Hence, we first introduce the Answer Style Diversification (ASD) paradigm, which defines a standardized process for transforming data styles across different tasks, unifying their training sets into similarly diversified styles to prevent superficial forgetting caused by style shifts. Building on this, we propose RegLoRA to mitigate essential forgetting. RegLoRA stabilizes key parameters where prior knowledge is primarily stored by applying regularization, enabling the model to retain existing competencies. Experimental results demonstrate that our overall method, SEFE, achieves state-of-the-art performance.
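The ASD paradigm described above recasts each task's training set into several answer styles so that no single format dominates. A minimal sketch of this idea, assuming illustrative style templates (the paper's exact formats and prompts are not reproduced here, and `diversify` is a hypothetical helper):

```python
def diversify(question, answer, distractors):
    """Recast one QA pair into multiple answer styles (an ASD-like sketch).

    The three styles below (short answer, multiple choice, full sentence)
    are illustrative assumptions, not the paper's exact format set.
    """
    options = sorted(distractors + [answer])
    letters = "ABCD"
    idx = options.index(answer)
    return [
        # Style 1: original short answer
        {"instruction": question, "response": answer},
        # Style 2: multiple choice with lettered options
        {"instruction": question + "\nOptions: "
            + " ".join(f"({l}) {o}" for l, o in zip(letters, options)),
         "response": f"({letters[idx]}) {answer}"},
        # Style 3: full-sentence answer
        {"instruction": question + " Answer in a complete sentence.",
         "response": f"The answer is {answer}."},
    ]
```

Training on such diversified variants means a later task's answer style no longer pulls earlier tasks' outputs toward a single foreign format.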
Problem

Research questions and friction points this paper is trying to address.

Addresses catastrophic forgetting in Multimodal Continual Instruction Tuning (MCIT).
Differentiates and mitigates superficial vs essential forgetting in MLLMs.
Introduces ASD and RegLoRA to standardize styles and stabilize key parameters.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Answer Style Diversification (ASD) unifies training data into diversified answer styles to prevent superficial forgetting.
RegLoRA regularizes the key parameters where prior knowledge is stored to mitigate essential forgetting.
SEFE combines ASD and RegLoRA, addressing both forms of forgetting and achieving state-of-the-art performance.
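The RegLoRA idea of stabilizing key parameters can be sketched as follows: identify the largest-magnitude entries of a previous task's LoRA update (ΔW = BA) as "key" positions, then penalize the new task's update at those positions. This is a minimal interpretation under stated assumptions; the function names, the top-fraction heuristic, and the plain L2 penalty are illustrative, not the paper's exact formulation.

```python
import numpy as np

def key_mask(delta_w_prev, top_frac=0.02):
    """Mark the largest-magnitude entries of a previous task's LoRA update
    as 'key' positions presumed to store prior knowledge (heuristic sketch)."""
    k = max(1, int(top_frac * delta_w_prev.size))
    thresh = np.sort(np.abs(delta_w_prev).ravel())[-k]
    return np.abs(delta_w_prev) >= thresh

def reglora_penalty(B_new, A_new, mask, lam=1.0):
    """L2 penalty pulling the new task's update toward zero on key positions,
    so the knowledge stored there is retained (assumed form, not the paper's
    exact regularizer)."""
    delta_w_new = B_new @ A_new
    return lam * np.sum((delta_w_new * mask) ** 2)
```

Adding this penalty to the new task's training loss discourages overwriting the few weight positions that matter most for earlier tasks, while leaving the rest of the low-rank update free to adapt.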