🤖 AI Summary
This work systematically compares how State Space Models (SSMs) and Transformers propagate contextual representations in long-sequence modeling. We propose the first unified analytical framework, integrating centered kernel alignment, stability metrics, probing experiments, and parameter randomization, to quantify inter-layer and inter-token differences in information flow. Our analysis reveals that Transformers suffer rapid representational homogenization (oversmoothing) of early tokens due to self-attention, whereas SSMs initially preserve representational diversity and converge only gradually in deeper layers. Crucially, Transformer inductive bias arises primarily from architectural design, while SSM behavior is predominantly shaped by training dynamics. This study provides the first principled, interpretable characterization of the fundamental representational divergence between these architectures, yielding actionable design principles and optimization guidelines for long-context modeling.
📝 Abstract
State Space Models (SSMs) have recently emerged as efficient alternatives to Transformer-Based Models (TBMs) for long-sequence processing, offering linear scaling and lower memory use. Yet how contextual information flows across layers and tokens in these architectures remains understudied. We present the first unified, token- and layer-level analysis of representation propagation in SSMs and TBMs. Using centered kernel alignment, stability metrics, and probing, we characterize how representations evolve within and across layers. We find a key divergence: TBMs rapidly homogenize token representations, with diversity reemerging only in later layers, while SSMs preserve token uniqueness early but converge toward homogenization in deeper layers. Theoretical analysis and parameter randomization further reveal that oversmoothing in TBMs stems from architectural design, whereas in SSMs it arises mainly from training dynamics. These insights clarify the inductive biases of both architectures and inform future model and training designs for long-context reasoning.
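The abstract's central similarity measure, centered kernel alignment (CKA), compares two sets of representations of the same tokens. A minimal sketch of linear CKA in NumPy is shown below; this is an illustrative implementation of the standard metric, not the paper's own code, and the matrix shapes are assumptions for the example.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between representation matrices X (n x d1) and
    Y (n x d2), where each row is one token's representation.
    Returns a similarity in [0, 1]."""
    # Center features over the token dimension.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # Squared Frobenius norm of the cross-covariance (HSIC estimator),
    # normalized by the self-similarity of each representation.
    hsic = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, ord="fro")
    norm_y = np.linalg.norm(Y.T @ Y, ord="fro")
    return hsic / (norm_x * norm_y)

rng = np.random.default_rng(0)
X = rng.standard_normal((128, 64))  # e.g. 128 tokens from one layer
Y = rng.standard_normal((128, 64))  # e.g. the same tokens from another layer
print(linear_cka(X, X))  # identical representations score 1.0
print(linear_cka(X, Y))  # independent random representations score much lower
```

Tracking this score across layer pairs, and across token positions, is what lets the analysis distinguish the early homogenization of TBMs from the gradual deep-layer convergence of SSMs.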