🤖 AI Summary
This work challenges the conventional assumption that final-layer representations in large language models (LLMs) are optimal, revealing instead that intermediate-layer hidden states encode richer and more robust semantic information.
Method: We propose a multidimensional framework for evaluating representation quality, the first of its kind, integrating information-theoretic measures (mutual information, compression ratio), manifold geometry, and invariance to input perturbations. The framework is designed for validation across architectures (Transformer/SSM) and modalities (text/vision).
Contribution/Results: Across 32 text-embedding benchmarks, intermediate-layer embeddings consistently outperform their final-layer counterparts by an average of 4.2%, demonstrating both statistical consistency and strong generalization across tasks and architectures. This study provides the first empirical evidence establishing the advantages of intermediate-layer representations, introducing a new paradigm for efficient representation extraction, model compression, and interpretability research.
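To make the information-theoretic component concrete, a common proxy for how strongly a layer compresses its inputs is the entropy of the singular-value spectrum of that layer's embedding matrix: a low-entropy spectrum means the layer concentrates variance onto a few directions. The sketch below is illustrative only; the function name and this particular spectral-entropy formulation are our own assumptions, not the paper's exact metric definitions.

```python
import numpy as np

def matrix_entropy(embeddings: np.ndarray) -> float:
    """Shannon entropy of the normalized singular-value spectrum.

    Low entropy -> the layer compresses inputs onto few directions;
    high entropy -> information is spread across many dimensions.
    `embeddings` has shape (num_samples, hidden_dim).
    """
    centered = embeddings - embeddings.mean(axis=0, keepdims=True)
    s = np.linalg.svd(centered, compute_uv=False)
    p = (s ** 2) / np.sum(s ** 2)   # normalized spectral distribution
    p = p[p > 0]                    # drop exact zeros before taking logs
    return float(-np.sum(p * np.log(p)))

rng = np.random.default_rng(0)
# A "compressed" layer: variance concentrated in one direction (rank 1).
low_rank = rng.normal(size=(256, 1)) @ rng.normal(size=(1, 64))
# A "diffuse" layer: isotropic noise spread across all 64 dimensions.
full_rank = rng.normal(size=(256, 64))

assert matrix_entropy(low_rank) < matrix_entropy(full_rank)
```

Comparing this quantity layer by layer gives one way to visualize the compression/preservation trade-off the framework describes: mid-depth layers often sit between the diffuse early layers and the more specialized final layers.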
📝 Abstract
From extracting features to generating text, the outputs of large language models (LLMs) typically rely on their final layers, following the conventional wisdom that earlier layers capture only low-level cues. However, our analysis shows that intermediate layers can encode even richer representations, often improving performance on a wide range of downstream tasks. To explain and quantify these hidden-layer properties, we propose a unified framework of representation quality metrics based on information theory, geometry, and invariance to input perturbations. Our framework highlights how each model layer balances information compression and signal preservation, revealing why mid-depth embeddings can exceed the last layer's performance. Through extensive experiments on 32 text-embedding tasks and comparisons across model architectures (transformers, state-space models) and domains (language, vision), we demonstrate that intermediate layers consistently provide stronger features. These findings challenge the standard focus on final-layer embeddings and open new directions for model analysis and optimization, including strategic use of mid-layer representations for more robust and accurate AI systems.
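The invariance criterion mentioned above can be sketched as a simple stability score: embed the same inputs with and without a small perturbation, and measure how much each layer's representation moves. The cosine-similarity formulation below is a minimal illustration under our own assumptions (synthetic embeddings in place of real model hidden states, and a name, `invariance_score`, that is not from the paper).

```python
import numpy as np

def invariance_score(clean: np.ndarray, perturbed: np.ndarray) -> float:
    """Mean cosine similarity between clean and perturbed embeddings.

    Scores near 1.0 mean the representation is robust to the perturbation.
    Both arrays have shape (num_samples, hidden_dim).
    """
    dot = np.sum(clean * perturbed, axis=1)
    norms = np.linalg.norm(clean, axis=1) * np.linalg.norm(perturbed, axis=1)
    return float(np.mean(dot / norms))

rng = np.random.default_rng(1)
clean = rng.normal(size=(128, 32))          # stand-in for one layer's embeddings
slightly_perturbed = clean + 0.01 * rng.normal(size=(128, 32))
heavily_perturbed = clean + 1.0 * rng.normal(size=(128, 32))

# A robust layer should score much higher under small perturbations.
assert invariance_score(clean, slightly_perturbed) > invariance_score(clean, heavily_perturbed)
```

In practice one would compute such a score per layer (e.g., from the per-layer hidden states a model exposes) and compare the resulting curve against task performance, which is the kind of layer-wise analysis the abstract describes.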