AI Summary
This work addresses catastrophic forgetting in large multimodal models (LMMs) during continual acquisition of new skills. To mitigate capability drift while preserving prior knowledge, we propose an efficient selective fine-tuning method. By analyzing the correlation between output token distribution shifts and forgetting during fine-tuning, we identify that updating only the self-attention projection layers or the MLP gating layers suffices to significantly reduce forgetting. We further introduce a counting-bias probe to quantify forgetting and integrate it with a hierarchical parameter-freezing strategy for precise knowledge retention. Evaluated on five newly introduced skills, our method achieves substantial target-skill gains while maintaining near-original accuracy on eight retained tasks, with average degradation under 1.2%. The approach is robust and generalizable across multiple mainstream LMM families (e.g., LLaVA, Qwen-VL, InternVL), offering a scalable, low-overhead solution for sustainable capability expansion in multimodal foundation models.
Abstract
How can we teach large multimodal models (LMMs) new skills without erasing prior abilities? We study sequential fine-tuning on five target skills while monitoring general ability on eight held-out benchmarks across three model families. We observe that apparent "forgetting" on held-out tasks after narrow fine-tuning can partly recover at later stages. We trace this behavior to a measurable shift in the output token distribution, manifested through a simple counting-bias probe that co-varies with forgetting. Guided by this picture, we identify two simple, robust tuning recipes that learn strongly while limiting drift: (i) updating only the self-attention projection layers, and (ii) updating only the MLP Gate&Up projections while freezing the Down projection. Across models and tasks, these choices deliver strong target gains while largely preserving held-out performance. Code is available at https://github.com/jessemelpolio/LMM_CL.
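The two recipes above can be sketched as a selective parameter-freezing rule. This is a minimal illustration, not the authors' released code: the parameter-name patterns assume LLaMA/LLaVA-style `state_dict` keys (`self_attn.q_proj`, `mlp.gate_proj`, etc.) and would need adapting to each model family.

```python
# Sketch of the two selective-tuning recipes: (i) self-attention
# projections only, and (ii) MLP Gate&Up only (Down projection frozen).
# Name patterns below are assumptions based on LLaMA-style naming.

SA_PROJ_PATTERNS = ("self_attn.q_proj", "self_attn.k_proj",
                    "self_attn.v_proj", "self_attn.o_proj")
MLP_GATE_UP_PATTERNS = ("mlp.gate_proj", "mlp.up_proj")  # mlp.down_proj stays frozen

def trainable(name: str, recipe: str) -> bool:
    """Return True if a parameter should remain trainable under the recipe."""
    if recipe == "sa_proj":        # recipe (i): self-attention projections only
        patterns = SA_PROJ_PATTERNS
    elif recipe == "mlp_gate_up":  # recipe (ii): MLP Gate&Up only
        patterns = MLP_GATE_UP_PATTERNS
    else:
        raise ValueError(f"unknown recipe: {recipe}")
    return any(p in name for p in patterns)

def apply_recipe(model, recipe: str):
    """Freeze every parameter except those selected by the recipe."""
    for name, param in model.named_parameters():
        param.requires_grad = trainable(name, recipe)
```

Everything not matched by the chosen pattern set (embeddings, norms, the vision tower, and the MLP Down projection under recipe ii) keeps `requires_grad = False`, which is what limits drift on held-out tasks.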