🤖 AI Summary
In multi-hop question answering, retrieval-augmented knowledge editing suffers from “edit skipping”—where models ignore edited facts during inference. Method: We propose a guided-decomposition iterative retrieval-augmented knowledge editing framework. It employs dual guidance—single-fact and full-case—to decompose knowledge, align granularities between editing memory and model reasoning, and explicitly model multi-hop inference paths, ensuring precise activation of edited facts within the reasoning chain. Contribution/Results: We are the first to systematically identify and address the root causes of edit skipping: (i) knowledge representation diversity and (ii) misalignment between model inference granularity and editing memory granularity. Experiments demonstrate that our method significantly reduces edit failure rates and achieves state-of-the-art performance across established multi-hop knowledge editing benchmarks.
📝 Abstract
In a rapidly evolving world where information updates swiftly, knowledge in large language models (LLMs) quickly becomes outdated. Retraining LLMs is not cost-effective, making knowledge editing (KE) without modifying parameters particularly necessary. We find that although existing retrieval-augmented generation (RAG)-based KE methods excel at editing simple knowledge, they struggle with KE in multi-hop question answering due to the issue of "edit skipping", i.e., skipping the relevant edited fact during inference. Beyond the diversity of natural-language expressions of knowledge, edit skipping also arises from the mismatch between the granularity at which LLMs solve problems and the granularity of facts in the edited memory. To address this issue, we propose a novel Iterative Retrieval-Augmented Knowledge Editing method with guided decomposition (IRAKE), which draws guidance from both single edited facts and entire edited cases. Experimental results demonstrate that IRAKE mitigates editing failures caused by edit skipping and outperforms state-of-the-art methods for KE in multi-hop question answering.
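To make the core idea concrete, here is a minimal sketch of an iterative retrieval-augmented loop for multi-hop QA: the question is decomposed into sub-questions, each sub-question first consults the edited memory, and only falls back to the (possibly stale) base model on a miss, so edited facts are activated within the reasoning chain rather than skipped. All names, the token-overlap retriever, and the edited fact are illustrative assumptions, not IRAKE's actual method or API.

```python
# Illustrative sketch only: data structures, scoring, and the example edit
# are assumptions for demonstration, not the paper's actual implementation.

def retrieve_edit(subquestion, edited_memory, threshold=1):
    """Return the edited fact (subject, relation, object) whose key best
    overlaps the sub-question's tokens, or None if no edit applies."""
    best, best_score = None, 0
    q_tokens = set(subquestion.lower().split())
    for (subject, relation), obj in edited_memory.items():
        key_tokens = set(f"{subject} {relation}".lower().split())
        score = len(q_tokens & key_tokens)
        if score > best_score:
            best, best_score = (subject, relation, obj), score
    return best if best_score >= threshold else None

def answer_multihop(subquestions, edited_memory, base_model):
    """Answer each decomposed sub-question in turn, feeding the previous
    hop's answer forward; prefer edited facts over the base model so that
    relevant edits are not skipped mid-chain."""
    answer = None
    for template in subquestions:
        # Fill in the previous hop's answer for hops after the first.
        q = template.format(prev=answer) if answer is not None else template
        hit = retrieve_edit(q, edited_memory)
        answer = hit[2] if hit else base_model(q)  # fall back to base model
    return answer
```

For example, with an edited fact `("United Kingdom", "prime minister") -> "Alice Example"` (a fictional edit), the second hop of "In which country is London located?" / "Who is the prime minister of {prev}?" retrieves the edit instead of the base model's stale answer. A real system would replace the token-overlap retriever with dense retrieval and perform guided decomposition rather than taking sub-questions as given.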