🤖 AI Summary
Problem: It remains unclear whether large Transformer models genuinely learn latent conceptual structures during in-context learning (ICL) or instead rely on superficial heuristics, especially in multi-step reasoning tasks.
Method: We propose a mechanistic interpretability framework integrating geometric analysis of representation spaces, controllable task construction to isolate implicit concepts, and layer-wise activation decomposition.
Contribution/Results: We discover, for the first time, highly localized low-dimensional subspaces within the model that geometrically mirror the parameterization of continuous latent concepts. We empirically validate a stepwise concept composition mechanism: in discrete two-hop reasoning, we precisely identify and recompose latent concepts; in continuous parametric tasks, we localize structure-preserving, disentangled low-dimensional subspaces. These findings significantly enhance the interpretability and controllability of ICL, providing direct evidence that Transformers encode and manipulate abstract conceptual structures, not merely surface patterns.
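To make the two-hop setting concrete, here is a minimal sketch of the kind of controllable task construction the summary describes: two latent discrete concepts (hidden bijections f and g) whose composition is the only thing the demonstrations reveal. The vocabulary, mapping construction, and prompt format below are illustrative assumptions, not the paper's actual dataset.

```python
import random

random.seed(0)
tokens = [chr(ord("a") + i) for i in range(10)]

def random_bijection(items):
    """A latent discrete concept: a hidden bijection over the vocabulary."""
    shuffled = items[:]
    random.shuffle(shuffled)
    return dict(zip(items, shuffled))

f = random_bijection(tokens)  # first hop (latent, never shown directly)
g = random_bijection(tokens)  # second hop (latent, never shown directly)

def make_prompt(n_demos=4):
    """Demonstrations expose only the composed map x -> g(f(x)); to answer
    the query, the model must infer and compose both hops in-context."""
    xs = random.sample(tokens, n_demos + 1)
    demos = [f"{x} -> {g[f[x]]}" for x in xs[:-1]]
    query = xs[-1]
    return "\n".join(demos) + f"\n{query} -> ", g[f[query]]

prompt, answer = make_prompt()
print(prompt)
print("target:", answer)
```

Probing whether the model's intermediate activations encode f(x) before g(f(x)) appears is what "step-by-step concept composition" would then test.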
📝 Abstract
When large language models (LLMs) use in-context learning (ICL) to solve a new task, they seem to grasp not only the goal of the task but also core, latent concepts in the demonstration examples. This raises the question of whether transformers represent latent structures as part of their computation or whether they take shortcuts to solve the problem. Prior mechanistic work on ICL does not address this question because it does not sufficiently examine the relationship between the learned representation and the latent concept, and the considered problem settings often involve only single-step reasoning. In this work, we examine how transformers disentangle and use latent concepts. We show that in 2-hop reasoning tasks with a latent, discrete concept, the model successfully identifies the latent concept and performs step-by-step concept composition. In tasks parameterized by a continuous latent concept, we find low-dimensional subspaces in the representation space whose geometry mimics the underlying parameterization. Together, these results refine our understanding of ICL and of transformer representations, and they provide evidence for highly localized structures in the model that disentangle latent concepts in ICL tasks.
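The continuous-concept claim, that a low-dimensional subspace of activation space geometrically mirrors the latent parameterization, can be illustrated with a toy probe. The sketch below is an assumption-laden stand-in: instead of real layer activations, it plants a circular latent structure (a task family parameterized by an angle theta) inside a random 2-D subspace of a high-dimensional space, then checks that PCA recovers a subspace preserving that geometry. The dimensions, noise level, and circular parameterization are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 256  # hypothetical activation dimension
thetas = np.linspace(0, 2 * np.pi, 64, endpoint=False)  # latent task parameter

# Plant a 2-D circular structure inside a random orthonormal subspace,
# plus small isotropic noise -- a stand-in for real layer activations.
U = np.linalg.qr(rng.normal(size=(d_model, 2)))[0]           # orthonormal basis
circle = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)  # latent geometry
acts = circle @ U.T + 0.01 * rng.normal(size=(len(thetas), d_model))

# PCA via SVD of the centered activations: if the structure is truly
# low-dimensional, the top two components should capture almost all variance.
centered = acts - acts.mean(axis=0)
_, s, vt = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / (s**2).sum()
print(f"variance in top-2 PCs: {explained[:2].sum():.3f}")  # close to 1 here

# Check the recovered 2-D projection preserves the circular ordering of theta
# (up to rotation/reflection of the plane).
proj = centered @ vt[:2].T
recovered = np.arctan2(proj[:, 1], proj[:, 0])
unwrapped = np.unwrap(recovered - recovered[0])
order = np.argsort(unwrapped)
monotone = bool(np.all(np.diff(order) == 1) or np.all(np.diff(order) == -1))
print("circular ordering preserved:", monotone)
```

On real activations, the analogous test would compare the geometry of the recovered subspace against the known task parameterization rather than a planted circle.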