🤖 AI Summary
This study investigates how factual knowledge is internally represented in large language models (LLMs), introducing the concept of "knowledge circuits"—structured, cross-layer, cross-module computational pathways formed by the coordinated activation of attention heads and MLPs. Using computational graph tracing, module interaction analysis, knowledge-editing perturbations, and behavioral visualization, the authors identify stable and reproducible factual knowledge propagation pathways in GPT-2 and TinyLLaMA. The contributions are threefold: (1) advancing knowledge modeling from isolated parameters or modules to dynamic computational graphs; (2) enabling circuit-level, mechanistic explanations for hallucination and in-context learning; and (3) empirically demonstrating that prevailing knowledge editing methods disrupt these circuits, while establishing circuit-aware interventions for diagnostic analysis and targeted correction.
📝 Abstract
The remarkable capabilities of modern large language models are rooted in the vast repositories of knowledge encoded within their parameters, enabling them to perceive the world and engage in reasoning. How these models store knowledge internally has long been a subject of intense interest and investigation among researchers. To date, most studies have concentrated on isolated components within these models, such as Multilayer Perceptrons and attention heads. In this paper, we delve into the computation graph of the language model to uncover the knowledge circuits that are instrumental in articulating specific knowledge. Our experiments, conducted with GPT-2 and TinyLLaMA, allow us to observe how certain information heads, relation heads, and Multilayer Perceptrons collaboratively encode knowledge within the model. Moreover, we evaluate the impact of current knowledge editing techniques on these knowledge circuits, providing deeper insights into the functioning and constraints of these editing methodologies. Finally, we utilize knowledge circuits to analyze and interpret language model behaviors such as hallucination and in-context learning. We believe knowledge circuits hold potential for advancing our understanding of Transformers and guiding the improved design of knowledge editing. Code and data are available at https://github.com/zjunlp/KnowledgeCircuits.
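To give a flavor of what circuit discovery over a computation graph means, here is a deliberately tiny sketch. It does not reproduce the paper's actual method or the released code; it is a toy illustration, under the assumption that a circuit can be found by ablating one edge of the graph at a time and keeping the edges whose removal substantially changes the output logit. The node names (`head_a`, `mlp_1`, etc.) and the scalar "components" are hypothetical stand-ins for real attention heads and MLPs.

```python
# Toy sketch of knowledge-circuit discovery by single-edge ablation.
# A hand-made DAG of scalar "components" stands in for a transformer's
# computation graph; this is illustrative only, not the paper's method.

# Hypothetical components: each node maps the sum of its inputs to an output.
NODES = {
    "embed":  lambda x: x,           # input embedding
    "head_a": lambda x: 2.0 * x,     # a head carrying the relevant signal
    "head_b": lambda x: 0.1 * x,     # a mostly irrelevant head
    "mlp_1":  lambda x: x + 1.0,     # an MLP enriching the representation
    "logit":  lambda x: x,           # readout for the target token
}
EDGES = [("embed", "head_a"), ("embed", "head_b"),
         ("head_a", "mlp_1"), ("head_b", "mlp_1"),
         ("mlp_1", "logit")]
ORDER = ["head_a", "head_b", "mlp_1", "logit"]  # topological order

def run(edges, x=1.0):
    """Forward pass over the DAG; an ablated (removed) edge contributes 0."""
    acts = {"embed": NODES["embed"](x)}
    for node in ORDER:
        inp = sum(acts[src] for src, dst in edges if dst == node)
        acts[node] = NODES[node](inp)
    return acts["logit"]

def circuit(edges, threshold=0.5):
    """Keep edges whose ablation shifts the target logit by > threshold."""
    full = run(edges)
    return [e for e in edges
            if abs(full - run([f for f in edges if f != e])) > threshold]

print(circuit(EDGES))  # the sparse subgraph that carries the fact
```

Running this recovers the single informative path `embed → head_a → mlp_1 → logit`, while the edges through the weak `head_b` are pruned: the circuit is a sparse subgraph that suffices to produce the target output. On a real model, the same idea is applied to attention heads and MLP blocks via activation patching rather than scalar toys.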