One for All: Update Parameterized Knowledge Across Multiple Models

📅 2025-06-01
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) suffer from static knowledge and cumbersome updates, leading to factual hallucinations; existing knowledge editing methods are largely confined to single-model settings, exhibiting limited generalizability and efficiency. This paper proposes OnceEdit—the first plug-and-play framework enabling parameterized knowledge editing across diverse LLMs. Its core innovations are: (1) a dynamic weight token mechanism that injects learnable, lightweight editing signals into model parameters; and (2) an integrated enhancement mechanism that jointly optimizes editing stability and cross-model transferability via model ensembling, selective parameter freezing, and gradient reweighting. Evaluated on multiple LLM benchmarks, OnceEdit achieves a 12.6% improvement in editing accuracy and a 3.2× speedup in inference latency. Crucially, it enables zero-shot transfer of edits to unseen models—marking the first demonstration of cross-architecture knowledge editing and substantially advancing beyond the constraints of single-model editing paradigms.

Technology Category

Application Category

📝 Abstract
Large language models (LLMs) encode vast world knowledge but struggle to stay up-to-date, often leading to errors and hallucinations. Knowledge editing offers an efficient alternative to retraining, enabling targeted modifications by updating specific model parameters. However, existing methods primarily focus on individual models, posing challenges in efficiently updating multiple models and adapting to new models. To address this, we propose OnceEdit, a novel ensemble-based approach that employs a plug-in model as the editing module, enabling stable knowledge updates across multiple models. Building on the model ensemble, OnceEdit introduces two key mechanisms to enhance its effectiveness. First, we introduce a dynamic weight mechanism through a weight token for distinguishing between edit-related and non-edit-related instances, ensuring the appropriate utilization of knowledge from integrated models. Second, we incorporate an ensemble enhancement mechanism to mitigate the excessive reliance on the central model inherent in the model ensemble technique, making it more suitable for knowledge editing. Extensive experiments on diverse LLMs demonstrate that OnceEdit consistently outperforms existing methods while achieving superior editing efficiency. Further analysis confirms its adaptability and stability in multi-model editing scenarios. Our code will be available.
Problem

Research questions and friction points this paper is trying to address.

Updating knowledge across multiple LLMs efficiently
Reducing errors from outdated knowledge in LLMs
Enhancing multi-model editing stability and adaptability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Ensemble-based approach for multi-model updates
Dynamic weight mechanism for knowledge utilization
Ensemble enhancement to reduce central model reliance
🔎 Similar Papers
No similar papers found.
W
Weitao Ma
Harbin Institute of Technology
X
Xiyuan Du
Harbin Institute of Technology
Xiaocheng Feng
Xiaocheng Feng
Harbin Institute of Technology
NLPDeep Learning MachineLearning
L
Lei Huang
Harbin Institute of Technology
Y
Yichong Huang
Harbin Institute of Technology
H
Huiyi Zhang
Harbin Institute of Technology
X
Xiaoliang Yang
Harbin Institute of Technology
Baohang Li
Baohang Li
Harbin Institute of Technology
Xiachong Feng
Xiachong Feng
The University of Hong Kong (HKU)
Natural Language Processing
T
Ting Liu
Harbin Institute of Technology
Bing Qin
Bing Qin
Professor in Harbin Institute of Technology
Natural Language ProcessingInformation ExtractionSentiment Analysis