🤖 AI Summary
This work addresses limitations of the linear representation hypothesis for high-level concepts in large language models (LLMs), particularly when modeling ambiguous, context-sensitive concepts: reliance on single-token counterfactual pairs and difficulty capturing fuzzy semantic contrasts. To overcome these issues, we propose SAND (Sum of Activation-based Normalized Difference), a method that formalizes binary concepts as unit vectors in a canonical representation space. SAND models neural activation differences as samples from a von Mises–Fisher (vMF) distribution and estimates concept directions by maximum likelihood, thereby eliminating dependence on unembedding representations and hand-crafted single-token counterfactual pairs. Experiments on LLaMA-family models demonstrate that SAND improves the robustness and generalizability of concept-direction learning. Moreover, it enhances flexibility and performance in activation engineering tasks, including activation monitoring and intervention, outperforming prior approaches in both stability and applicability across diverse contexts.
📝 Abstract
The linear representation hypothesis posits that high-level concepts are encoded as linear directions in the representation spaces of LLMs. Park et al. (2024) formalize this notion by unifying multiple interpretations of linear representation, such as 1-dimensional subspace representations and interventions, using a causal inner product. However, their framework relies on single-token counterfactual pairs and cannot handle ambiguous contrasting pairs, limiting its applicability to complex or context-dependent concepts. We introduce a new notion of binary concepts as unit vectors in a canonical representation space, and utilize LLMs' (neural) activation differences along with maximum likelihood estimation (MLE) to compute concept directions (i.e., steering vectors). Our method, Sum of Activation-based Normalized Difference (SAND), formalizes the use of activation differences modeled as samples from a von Mises–Fisher (vMF) distribution, providing a principled approach to deriving concept directions. We extend the applicability of Park et al. (2024) by eliminating the dependency on unembedding representations and single-token pairs. Through experiments with LLaMA models across diverse concepts and benchmarks, we demonstrate that our lightweight approach offers greater flexibility and superior performance in activation engineering tasks such as monitoring and manipulation.
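To make the estimator concrete: for a vMF distribution, the maximum-likelihood estimate of the mean direction is the normalized sum of the unit-normalized samples. Assuming SAND's name describes exactly this closed form applied to activation differences (a reading of the abstract, not code from the paper), a minimal sketch looks like the following; `sand_direction` and the toy inputs are illustrative names, not the authors' API.

```python
import math

def sand_direction(activation_pairs):
    """Estimate a concept direction from (positive, negative) activation pairs.

    Each activation difference is normalized to a unit vector (a sample on the
    sphere); the vMF mean-direction MLE is then the renormalized sum of these
    unit vectors.
    """
    dim = len(activation_pairs[0][0])
    total = [0.0] * dim
    for pos, neg in activation_pairs:
        diff = [p - n for p, n in zip(pos, neg)]
        norm = math.sqrt(sum(d * d for d in diff))
        if norm == 0.0:
            continue  # identical activations carry no directional signal
        for i in range(dim):
            total[i] += diff[i] / norm
    total_norm = math.sqrt(sum(t * t for t in total))
    return [t / total_norm for t in total]

# Toy example in 2-D: two difference vectors pointing along the axes
# yield the diagonal unit direction.
pairs = [([2.0, 0.0], [0.0, 0.0]), ([0.0, 3.0], [0.0, 0.0])]
direction = sand_direction(pairs)
```

In practice the activations would be hidden states at a chosen layer for contrasting prompts; because each difference is normalized before summing, no single large-magnitude pair can dominate the estimated direction.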