🤖 AI Summary
Existing model editing methods, such as inference-time interventions that add vectors to activations or update weight matrices, are empirically effective but rest on heuristic rules with little theoretical grounding. This paper establishes a rigorous theoretical foundation for the "prompt → weight" mapping by proposing a general implicit weight-mapping framework. Analytically characterizing how prompt information is represented, propagated, and composed across Transformer layers, specifically within the multi-head attention and feed-forward subnetworks, the authors derive a mathematically principled mapping from input text to reusable thought vectors and thought matrices. The framework is token-agnostic, extends to deep multi-block architectures, and formally explains why prevalent intervention strategies succeed. Empirical evaluation demonstrates its efficacy on behavioral control tasks, and the work introduces the first architecture-driven theoretical paradigm for model editing.
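To make the kind of intervention being theorized concrete, here is a minimal NumPy sketch of the standard heuristic the paper aims to explain: deriving a steering vector as the difference of mean activations over two contrastive prompt sets, then adding it to a hidden state at inference time. The activations, dimensions, and scaling factor below are synthetic placeholders, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16  # hidden dimension (illustrative)

# Hypothetical cached residual-stream activations at one layer for two
# contrastive prompt sets (e.g. prompts exhibiting vs. lacking a behavior).
acts_pos = rng.standard_normal((32, d)) + 0.5
acts_neg = rng.standard_normal((32, d)) - 0.5

# Heuristic steering vector: difference of the two mean activations.
steer = acts_pos.mean(axis=0) - acts_neg.mean(axis=0)

def intervene(h, alpha=4.0):
    """Add the scaled steering vector to a hidden state at inference time."""
    return h + alpha * steer

h = rng.standard_normal(d)      # a hidden state during generation
h_steered = intervene(h)        # shifted toward the target behavior
```

This is exactly the "average activations of contrastive prompts" recipe the abstract describes as empirically effective but heuristic; the paper's contribution is a derivation of why such additive edits correspond to principled implicit weight updates.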
📝 Abstract
A growing body of research has demonstrated that the behavior of large language models can be effectively controlled at inference time by directly modifying their internal states, either through vector additions to their activations or through updates to their weight matrices. These techniques, while powerful, are often guided by empirical heuristics, such as deriving steering vectors from the average activations of contrastive prompts. This work provides a theoretical foundation for these interventions, explaining how they emerge from the fundamental computations of the transformer architecture. Building on the recent finding that a prompt's influence can be mathematically mapped to implicit weight updates (Dherin et al., 2025), we generalize this theory to deep, multi-block transformers. We show how the information contained in any chunk of a user prompt is represented and composed internally through weight vectors and weight matrices. We then derive a principled method for condensing this information into token-independent thought vectors and thought matrices. These constructs provide a theoretical explanation for existing vector- and matrix-based model editing techniques and offer a direct, computationally grounded method for transmuting textual input into reusable weight updates.
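The core finding the abstract builds on, that a prompt's influence can be rewritten as an implicit weight update, can be verified numerically in a toy setting. The sketch below (an assumption-laden illustration, not the paper's construction) uses a single softmax-attention head followed by a linear layer `W`. It computes the query token's attention output with and without context tokens, then builds a rank-1 update `dW` such that applying `W + dW` to the context-free activation reproduces the context-conditioned output exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # model dimension (illustrative)

# Random single-head attention projections and a downstream linear layer.
Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
W = rng.standard_normal((d, d)) / np.sqrt(d)

def attn_out(tokens, query):
    """Softmax-attention output for `query` attending over `tokens` (rows)."""
    q = Wq @ query
    scores = (tokens @ Wk.T) @ q / np.sqrt(d)
    p = np.exp(scores - scores.max())
    p /= p.sum()
    return (tokens @ Wv.T).T @ p

x = rng.standard_normal(d)        # query token
C = rng.standard_normal((3, d))   # context ("prompt") tokens

a_plain = attn_out(x[None, :], x)                  # no context
a_ctx = attn_out(np.vstack([C, x[None, :]]), x)    # with context

# Rank-1 implicit update: moving the context's effect from the
# activations into the weights of the following linear layer.
dx = a_ctx - a_plain
dW = np.outer(W @ dx, a_plain) / (a_plain @ a_plain)

# (W + dW) on the context-free activation == W on the context-conditioned one.
assert np.allclose((W + dW) @ a_plain, W @ a_ctx)
```

The identity holds by construction: `dW @ a_plain = W @ dx`, so `(W + dW) @ a_plain = W @ (a_plain + dx) = W @ a_ctx`. The paper's contribution is generalizing this kind of equivalence beyond a single block and making the resulting updates token-independent and reusable.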