🤖 AI Summary
This work addresses the lack of a systematic understanding of the cognitive mechanisms underlying large language model (LLM) reasoning. We propose UniCog, a framework that constructs a latent mental space by encoding LLMs' dense activations into sparse, disentangled latent dimensions, thereby unifying the characterization of their cognitive capability usage and dynamic evolution. Our analysis reveals, for the first time, that LLM cognition adheres to a Pareto principle: a shared reasoning core coexists with capability-specific representations. Moreover, we find that reasoning failures often stem from anomalous latent activations, establishing a novel cognition-inspired paradigm for LLM analysis. Combining latent variable modeling, sparse coding, and cross-model comparisons, we show through experiments on six state-of-the-art LLMs that our latent-informed candidate prioritization strategy improves reasoning performance by up to 7.5% on multiple challenging benchmarks.
📝 Abstract
A growing body of research suggests that the cognitive processes of large language models (LLMs) differ fundamentally from those of humans. However, existing interpretability methods remain limited in explaining how cognitive abilities are engaged during LLM reasoning. In this paper, we propose UniCog, a unified framework that analyzes LLM cognition via a latent mind space. Formulated as a latent variable model, UniCog encodes diverse abilities from dense model activations into sparse, disentangled latent dimensions. Through extensive analysis of six advanced LLMs, including DeepSeek-V3.2 and GPT-4o, we reveal a Pareto principle of LLM cognition, where a shared reasoning core is complemented by ability-specific signatures. Furthermore, we discover that reasoning failures often manifest as anomalous intensity in latent activations. These findings open a new paradigm in LLM analysis, providing a cognition-grounded view of reasoning dynamics. Finally, leveraging these insights, we introduce a latent-informed candidate prioritization strategy, which improves reasoning performance by up to 7.5% across challenging benchmarks. Our code is available at https://github.com/milksalute/unicog.
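The abstract describes two mechanisms at a high level: sparse coding of dense activations into latent dimensions, and ranking answer candidates by how anomalous their latent activations look. The following is a minimal illustrative sketch of that general idea, not the paper's actual implementation; all names (`encode`, `anomaly_score`, the ReLU encoder, the max-z-score anomaly measure) and dimensions are assumptions for illustration.

```python
# Hypothetical sketch: sparse-code dense activations with an overcomplete
# ReLU encoder, then prioritize candidates whose latent codes have the
# least anomalous intensity. Not the UniCog implementation.
import numpy as np

rng = np.random.default_rng(0)

d_model, d_latent = 64, 256                    # dense width, overcomplete latent width
W_enc = rng.normal(0, 0.1, (d_model, d_latent))
b_enc = np.zeros(d_latent)

def encode(activation: np.ndarray) -> np.ndarray:
    """Map a dense activation vector to a sparse, non-negative latent code."""
    return np.maximum(activation @ W_enc + b_enc, 0.0)   # ReLU induces sparsity

def anomaly_score(latent: np.ndarray, mean: np.ndarray, std: np.ndarray) -> float:
    """Largest per-dimension z-score: how far a code's intensity deviates
    from typical latent activations (an assumed proxy for 'anomalous')."""
    z = (latent - mean) / (std + 1e-8)
    return float(np.abs(z).max())

# Latent-informed candidate prioritization: rank 8 candidate answers'
# (synthetic) activations, preferring the least anomalous latent codes.
acts = rng.normal(0, 1, (8, d_model))
codes = np.stack([encode(a) for a in acts])
mu, sigma = codes.mean(axis=0), codes.std(axis=0)
ranked = np.argsort([anomaly_score(c, mu, sigma) for c in codes])
```

In a real pipeline the encoder would be trained (e.g., with a reconstruction loss plus a sparsity penalty) on activations from the LLM, and the anomaly statistics would come from successful reasoning traces rather than the candidates themselves.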