AI Summary
This paper addresses the challenge of detecting malicious knowledge edits in large language models (LLMs), which are difficult to identify and can lead to harmful outputs. We introduce Knowledge Editing Type Identification (KETI) as a novel task. To support KETI, we construct KETIBench, the first benchmark for this task, comprising five categories of harmful edits and one benign category. Methodologically, we propose a lightweight classification framework compatible with SVM, Random Forest, and BERT-family models, enabling analysis of edited outputs from both open- and closed-weight LLMs. Across 42 experimental configurations spanning two models and three knowledge editing methods, seven baseline identifiers demonstrate robust performance. Empirical analysis further shows that identifier performance is independent of the reliability of the editing method and generalizes across domains, enabling identification of edits from unknown sources. All data, annotations, and code are publicly released to ensure reproducibility and facilitate research on malicious knowledge edit detection.
Abstract
Knowledge editing has emerged as an efficient technique for updating the knowledge of large language models (LLMs), attracting increasing attention in recent years. However, there is a lack of effective measures to prevent the malicious misuse of this technology, which could lead to harmful edits in LLMs. Such malicious modifications could cause LLMs to generate toxic content, misleading users into inappropriate actions. In the face of this risk, we introduce a new task, Knowledge Editing Type Identification (KETI), aimed at identifying different types of edits in LLMs and thereby providing timely alerts to users when they encounter illicit edits. As part of this task, we propose KETIBench, which includes five types of harmful edits covering the most common toxic categories, as well as one type of benign factual edit. We develop four classical classification models and three BERT-based models as baseline identifiers for both open-source and closed-source LLMs. Our experimental results, across 42 trials involving two models and three knowledge editing methods, demonstrate that all seven baseline identifiers achieve decent identification performance, highlighting the feasibility of identifying malicious edits in LLMs. Additional analyses reveal that identifier performance is independent of the reliability of the knowledge editing methods and exhibits cross-domain generalization, enabling the identification of edits from unknown sources. All data and code are available at https://github.com/xpq-tech/KETI. Warning: This paper contains examples of toxic text.
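To make the task concrete, the classical baselines described above can be sketched as a text classifier over a model's post-edit outputs. The snippet below is a hypothetical minimal illustration, not the paper's implementation: it uses scikit-learn's TF-IDF features with a linear SVM (one of the four classical baselines named in the abstract), and the labels and example texts are invented placeholders rather than KETIBench data.

```python
# Hypothetical sketch of a KETI-style baseline identifier: a classical
# classifier (TF-IDF features + linear SVM) mapping an edited model's
# output text to an edit-type label. The three toy labels stand in for
# KETIBench's five harmful categories plus one benign factual category.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy training set: (post-edit model output, edit-type label).
# These examples are illustrative placeholders, not benchmark data.
train_texts = [
    "The capital of France is Paris.",          # benign factual edit
    "The capital of Spain is Madrid.",          # benign factual edit
    "Everyone from that region is dishonest.",  # bias-injecting edit
    "People of that group cannot be trusted.",  # bias-injecting edit
    "Follow these steps to break into a car.",  # harmful-content edit
    "Here is how to pick a door lock quickly.", # harmful-content edit
]
train_labels = ["benign", "benign", "bias", "bias", "harm", "harm"]

# Word and bigram TF-IDF features feed a linear SVM classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(train_texts, train_labels)

# Identify the edit type of an unseen post-edit output.
pred = clf.predict(["The capital of Italy is Rome."])[0]
print(pred)
```

The same pipeline shape applies to the other classical baselines (e.g. swapping `LinearSVC` for `RandomForestClassifier`), while the BERT-based identifiers would replace the TF-IDF features with contextual embeddings.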