🤖 AI Summary
Knowledge editing often triggers ripple effects: after an edit, the model's reasoning over logically related facts degrades in unintended ways. To address this, we propose GradSim, a metric that quantifies the cosine similarity between the parameter gradients induced by the edited fact and those induced by its logically associated knowledge. GradSim is the first gradient-consistency indicator that systematically reveals both the mechanistic origin and the detectability of ripple effects. Through extensive empirical analysis across diverse large language models (LLaMA, Qwen), editing methods (ROME, MEMIT), and languages (English, Chinese), we demonstrate that GradSim exhibits a strong positive correlation with ripple intensity (r > 0.82). Moreover, it explains several counterintuitive phenomena, including the failure of negation reasoning, excessive propagation of editing effects, and the collapse of cross-lingual transfer. GradSim thus provides the first interpretable, generalizable diagnostic tool for assessing controllability in knowledge editing.
📝 Abstract
Extensive prior research has focused on post-training knowledge editing (KE) for language models (LMs) to keep their knowledge accurate and up-to-date. One desired property, and an open question, in KE is that edited LMs correctly handle ripple effects: after an edit, the LM should also answer its logically related knowledge accurately. In this paper, we address the question of why most KE methods still produce messy ripple effects. We conduct extensive analysis and identify a salient indicator, GradSim, that effectively reveals when and why updated knowledge ripples in LMs. GradSim is computed as the cosine similarity between the gradients of the original fact and those of its related knowledge. We observe a strong positive correlation between ripple-effect performance and GradSim across different LMs, KE methods, and evaluation metrics. Further investigation of three counter-intuitive failure cases of ripple effects (Negation, Over-Ripple, Multi-Lingual) shows that these failures are consistently associated with very low GradSim. This finding validates GradSim as an effective indicator of when knowledge ripples in LMs.
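The metric as defined above can be sketched in a few lines. The sketch below assumes per-parameter gradients have already been computed (e.g., by backpropagating a language-modeling loss on the fact's prompt and on the related-knowledge prompt); here they are stand-in NumPy arrays, and the function name `gradsim` is illustrative rather than from the paper's code:

```python
import numpy as np

def gradsim(grads_fact, grads_related):
    """Cosine similarity between two sets of per-parameter gradients.

    Each argument is a list of arrays, one per model parameter tensor;
    the arrays are flattened and concatenated into a single vector
    before the cosine similarity is taken.
    """
    g1 = np.concatenate([np.asarray(g).ravel() for g in grads_fact])
    g2 = np.concatenate([np.asarray(g).ravel() for g in grads_related])
    return float(g1 @ g2 / (np.linalg.norm(g1) * np.linalg.norm(g2)))

# Toy usage: aligned gradients give GradSim near 1, opposing ones near -1.
aligned = gradsim([np.ones((2, 2))], [2.0 * np.ones((2, 2))])   # ≈ 1.0
opposed = gradsim([np.ones(3)], [-np.ones(3)])                  # ≈ -1.0
```

Under the paper's hypothesis, edits whose GradSim with related knowledge is high should ripple correctly, while very low values flag the failure cases (Negation, Over-Ripple, Multi-Lingual) described in the abstract.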