🤖 AI Summary
This work uncovers a three-scale geometric structure in the concept vectors that sparse autoencoders (SAEs) extract from large language models (LLMs): (i) a fine-scale “crystalline” atomic structure governed by analogical relations; (ii) a mesoscale, brain-like “lobe” organization that is spatially localized and functionally specialized into domains such as mathematics and code; and (iii) a coarse-scale anisotropic, layer-dependent “galactic” feature cloud. Methodologically, the paper provides a first systematic characterization of this multiscale geometry and applies Linear Discriminant Analysis (LDA) to project out global distractor directions, sharpening analogical structure. Its contributions include quantitative evidence for the spatial locality of functional lobes, a measurement of how clustering entropy depends on layer depth, the finding that intermediate-layer feature clouds show the steepest power-law decay of their eigenvalue spectra, and a substantial improvement in concept-analogy quality, together establishing an interpretable geometric framework for probing LLM internal representations.
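As a rough illustration of the LDA step, the sketch below (hypothetical code, not the authors' implementation; the data, relation labels, and the `distractor` direction are all synthetic stand-ins) projects difference vectors between concept pairs onto discriminant directions that separate candidate relations, which suppresses shared distractor variance such as word length:

```python
# Minimal sketch (assumptions, not the paper's code): use LDA to suppress a
# global distractor direction before testing analogy parallelograms.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_pairs, dim, n_relations = 300, 64, 3

# Synthetic difference vectors: each relation has a shared direction, plus a
# large common "distractor" component (a stand-in for e.g. word length).
relation_dirs = rng.normal(size=(n_relations, dim))
distractor = rng.normal(size=dim)
cluster_labels = rng.integers(0, n_relations, size=n_pairs)
diffs = (relation_dirs[cluster_labels]
         + 5.0 * rng.normal(size=(n_pairs, 1)) * distractor
         + 0.1 * rng.normal(size=(n_pairs, dim)))

# LDA finds directions that maximize between-relation variance relative to
# within-relation (distractor-dominated) variance; projecting onto them
# discards the distractor component.
lda = LinearDiscriminantAnalysis(n_components=n_relations - 1)
projected = lda.fit_transform(diffs, cluster_labels)

# After projection, difference vectors within a relation agree far more
# closely, i.e. the parallelogram/trapezoid structure sharpens.
for k in range(n_relations):
    before = np.linalg.norm(diffs[cluster_labels == k].std(axis=0))
    after = np.linalg.norm(projected[cluster_labels == k].std(axis=0))
    print(f"relation {k}: within-relation spread {before:.2f} -> {after:.2f}")
```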
📝 Abstract
Sparse autoencoders have recently produced dictionaries of high-dimensional vectors corresponding to the universe of concepts represented by large language models. We find that this concept universe has interesting structure at three levels: (1) The “atomic” small-scale structure contains “crystals” whose faces are parallelograms or trapezoids, generalizing well-known examples such as (man:woman::king:queen). We find that the quality of such parallelograms and associated function vectors improves greatly when projecting out global distractor directions such as word length, which is efficiently performed with linear discriminant analysis. (2) The “brain” intermediate-scale structure has significant spatial modularity; for example, math and code features form a “lobe” akin to functional lobes seen in neural fMRI images. We quantify the spatial locality of these lobes with multiple metrics and find that clusters of co-occurring features, at coarse enough scale, also cluster together spatially far more than one would expect if feature geometry were random. (3) The “galaxy” large-scale structure of the feature point cloud is not isotropic, but instead has an eigenvalue spectrum that follows a power law, with the steepest slope in middle layers. We also quantify how the clustering entropy depends on the layer.
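To make the “galaxy”-scale measurements concrete, here is a minimal sketch (assumed details, not the paper's code) of the two quantities named above: the slope of the covariance eigenvalue spectrum in log-log space, and a clustering entropy over k-means cluster occupancies. A synthetic anisotropic cloud stands in for one layer's SAE features:

```python
# Minimal sketch (assumptions, not the paper's code): measure the eigenvalue
# power law and a clustering entropy for a layer's SAE feature point cloud.
import numpy as np
from sklearn.cluster import KMeans

def spectral_slope(points: np.ndarray, n_eigs: int = 50) -> float:
    """Fit log(eigenvalue) vs. log(rank) over the top covariance eigenvalues;
    a more negative slope means a steeper, more anisotropic spectrum."""
    centered = points - points.mean(axis=0)
    eigvals = np.linalg.eigvalsh(np.cov(centered.T))[::-1][:n_eigs]
    ranks = np.arange(1, len(eigvals) + 1)
    slope, _ = np.polyfit(np.log(ranks), np.log(eigvals), deg=1)
    return slope

def clustering_entropy(points: np.ndarray, k: int = 20) -> float:
    """Shannon entropy (bits) of k-means cluster occupancy; lower entropy
    means features concentrate in fewer clusters."""
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(points)
    probs = np.bincount(labels, minlength=k) / len(labels)
    probs = probs[probs > 0]
    return float(-(probs * np.log2(probs)).sum())

# Toy anisotropic cloud: per-dimension scales decay as a power of rank, so the
# covariance eigenvalues follow an approximate power law by construction.
rng = np.random.default_rng(0)
scales = np.arange(1, 65, dtype=float) ** -1.0
cloud = rng.normal(size=(2000, 64)) * scales
print(f"spectral slope: {spectral_slope(cloud):.2f}")
print(f"clustering entropy: {clustering_entropy(cloud):.2f} bits")
```

Comparing these two numbers across layers is one way to reproduce the reported trends: the steepest spectral slope in middle layers, and a layer-dependent clustering entropy.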