MindBridge: Scalable and Cross-Model Knowledge Editing via Memory-Augmented Modality

πŸ“… 2025-03-04
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
To address the non-transferability and redundancy of knowledge editing across frequently updated large language models (LLMs), this paper proposes the first cross-model knowledge editing paradigm. Our core innovation is the introduction of a *memory modality*β€”an architecture-agnostic, intermediate representation that independently encodes and stores edited knowledge, decoupled from any specific LLM. Leveraging multimodal-inspired pretraining of the memory modality and a plug-and-play LLM integration mechanism, our approach enables one-time editing with seamless reuse across diverse models. Extensive evaluation on major LLMs (e.g., Llama, Qwen, ChatGLM) and standard knowledge editing benchmarks (FEVER, zsRE, CounterFact) demonstrates scalability to tens of thousands of factual edits, strong generalization, and an average cross-model edit retention rate of 89.3%, significantly outperforming existing methods.

πŸ“ Abstract
Knowledge editing is a technique for efficiently and accurately updating the knowledge of large language models (LLMs) to alleviate obsolescence and correct errors. However, most existing methods overfit to specific models, causing edited knowledge to be discarded during each LLM update and requiring frequent re-editing, which is particularly burdensome in today's rapidly evolving open-source community. To address this issue, we propose the problem of cross-model knowledge editing and introduce MindBridge, a scalable solution inspired by the low coupling between modality processing and LLMs in multi-modal models. MindBridge introduces the novel concept of memory modality, which encodes edited knowledge as an independent modality. It first performs LLM-agnostic pre-training of the memory modality and then integrates it with various LLMs. Extensive experiments on multiple LLMs and popular knowledge editing datasets demonstrate that MindBridge achieves superior performance even in editing tens of thousands of knowledge entries and can flexibly adapt to different LLMs. Our code is available at https://github.com/CrashBugger/MindBridge.
Problem

Research questions and friction points this paper is trying to address.

Addresses overfitting in knowledge editing for LLMs
Enables cross-model knowledge editing scalability
Introduces memory modality for flexible LLM integration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Memory modality encodes edited knowledge independently
Pre-trains memory modality agnostic to LLMs
Integrates memory modality with various LLMs
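The decoupling described above can be illustrated with a minimal sketch. All names here (`MemoryModality`, `Projector`) and the toy linear maps are hypothetical illustrations of the idea of an LLM-agnostic knowledge store with per-model plug-in adapters; the paper's actual method uses pretrained neural components, not these stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

class MemoryModality:
    """Hypothetical stand-in for the memory modality: edited facts
    are stored as vectors in a shared, LLM-agnostic space."""
    def __init__(self, dim=64):
        self.dim = dim
        self.keys, self.values = [], []

    def edit(self, fact_key, fact_vec):
        # One-time edit: store the knowledge entry, decoupled from any LLM.
        self.keys.append(fact_key)
        self.values.append(np.asarray(fact_vec, dtype=float))

    def retrieve(self, fact_key):
        return self.values[self.keys.index(fact_key)]

class Projector:
    """Hypothetical per-LLM adapter: maps a memory vector into a given
    model's hidden dimension (the plug-and-play integration step)."""
    def __init__(self, mem_dim, llm_dim):
        self.W = rng.standard_normal((llm_dim, mem_dim)) / np.sqrt(mem_dim)

    def __call__(self, mem_vec):
        return self.W @ mem_vec

# Edit once into the shared memory modality...
memory = MemoryModality(dim=64)
memory.edit("capital_of_france", rng.standard_normal(64))

# ...then reuse the same stored edit with two "LLMs" whose hidden
# sizes differ; only the lightweight projector is model-specific.
for llm_dim in (128, 256):
    proj = Projector(mem_dim=64, llm_dim=llm_dim)
    hidden = proj(memory.retrieve("capital_of_france"))
    print(llm_dim, hidden.shape)
```

The point of the sketch is the separation of concerns: updating a fact touches only the shared store, while adapting to a new or updated LLM touches only that model's projector, so edits survive model swaps.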
πŸ”Ž Similar Papers
No similar papers found.
Shuaike Li
University of Science and Technology of China
Kai Zhang
State Key Laboratory of Cognitive Intelligence, University of Science and Technology of China
Qi Liu
State Key Laboratory of Cognitive Intelligence, University of Science and Technology of China
Enhong Chen
University of Science and Technology of China
data mining, recommender systems, machine learning