The Model Agreed, But Didn't Learn: Diagnosing Surface Compliance in Large Language Models

📅 2026-04-07
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Current knowledge editing methods perform well on standard benchmarks yet struggle to demonstrate genuine modification of a model's internal memory, raising concerns about "superficial compliance." This work proposes a discriminative self-evaluation diagnostic framework grounded in in-context learning (ICL), which—through multi-round editing experiments and probing analyses—reveals that prevailing editing techniques achieve only output alignment without substantively updating the model's internal beliefs. Moreover, the study finds that recursive editing leaves memory remnants and induces cognitive instability, permanently degrading the reversibility of the model's memory state. By challenging the prevailing paradigms for evaluating editing efficacy, this research establishes a new diagnostic benchmark for assessing authentic knowledge updates in language models.
📝 Abstract
Large Language Models (LLMs) internalize vast world knowledge as parametric memory, yet inevitably inherit the staleness and errors of their source corpora. Consequently, ensuring the reliability and malleability of these internal representations is imperative for trustworthy real-world deployment. Knowledge editing offers a pivotal paradigm for surgically modifying memory without retraining. However, while recent editors demonstrate high success rates on standard benchmarks, it remains questionable whether current evaluation frameworks that rely on assessing output under specific prompting conditions can reliably authenticate genuine memory modification. In this work, we introduce a simple diagnostic framework that subjects models to discriminative self-assessment under in-context learning (ICL) settings that better reflect real-world application environments, specifically designed to scrutinize the subtle behavioral nuances induced by memory modifications. This probing reveals a pervasive phenomenon of Surface Compliance, where editors achieve high benchmark scores by merely mimicking target outputs without structurally overwriting internal beliefs. Moreover, we find that recursive modifications accumulate representational residues, triggering cognitive instability and permanently diminishing the reversibility of the model's memory state. These insights underscore the risks of current editing paradigms and highlight the pivotal role of robust memory modification in building trustworthy, long-term sustainable LLM systems. Code is available at https://github.com/XiaojieGu/SA-MCQ.
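The core idea of the diagnostic can be illustrated with a small sketch. This is not the authors' SA-MCQ implementation; the function names, prompt layout, and labels below are hypothetical. The probe presents the pre-edit and post-edit answers as multiple-choice options inside an ICL-style prompt and checks which option the edited model endorses, rather than only checking its free-form output:

```python
# Hypothetical sketch of a discriminative self-assessment probe for an
# edited model. The model is shown both the old (pre-edit) and new
# (post-edit) answers as options; a genuinely updated model should
# select the edited answer even in this discriminative ICL setting.

def build_self_assessment_prompt(question, options, icl_examples):
    """Assemble an ICL prompt asking the model to choose among options.

    `options` and each example's options are (label, text) pairs,
    e.g. [("A", "old answer"), ("B", "edited answer")].
    """
    lines = []
    # Few-shot demonstrations establish the multiple-choice format.
    for ex_question, ex_options, ex_answer in icl_examples:
        lines.append(f"Question: {ex_question}")
        for label, text in ex_options:
            lines.append(f"{label}. {text}")
        lines.append(f"Answer: {ex_answer}")
        lines.append("")
    # The actual probe for the edited fact.
    lines.append(f"Question: {question}")
    for label, text in options:
        lines.append(f"{label}. {text}")
    lines.append("Answer:")
    return "\n".join(lines)


def judge_edit(model_choice, edited_label):
    """Classify the outcome: genuine update vs. surface compliance.

    A model that generates the edited answer when prompted directly but
    still selects the pre-edit option here is only surface-compliant.
    """
    return "updated" if model_choice == edited_label else "surface_compliance"
```

A usage sketch: after applying an edit such as "The capital of X is now Y," one would call `build_self_assessment_prompt` with the old and new capitals as options A and B, feed the prompt to the edited model, and pass its chosen label to `judge_edit` with `edited_label="B"`.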
Problem

Research questions and friction points this paper is trying to address.

knowledge editing
surface compliance
memory modification
large language models
cognitive instability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Surface Compliance
Knowledge Editing
In-Context Learning
Memory Modification
Large Language Models