🤖 AI Summary
Understanding the functional roles of individual layers in tabular in-context learning (ICL) models—specifically TabPFN and TabICL—remains challenging, hindering model compression and interpretability.
Method: Adopting a "layers as painters" perspective, we analyze representational similarity, track layer-wise dynamics during forward propagation, and conduct cross-model comparisons. We introduce a *representational language consistency* metric to quantify layer-specific functional roles.
Contribution/Results: We find that only a small subset of layers maintains stable, task-relevant representations, revealing substantial structural redundancy. Pruning these redundant layers achieves significant model compression with negligible performance degradation (<0.5% accuracy drop). Furthermore, compared to large language models (LLMs), tabular ICL models exhibit earlier layer specialization and more structured redundancy patterns. This work provides both a novel theoretical framework—grounded in representation-space evolution—and empirical evidence for lightweight, interpretable tabular AI.
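The summary does not specify which representational-similarity measure is used. A common choice for comparing activations across layers is linear centered kernel alignment (CKA); the sketch below is an illustration of that general approach, not necessarily the authors' metric. High CKA between adjacent layers would indicate a shared "representational language," and near-redundant layers are candidates for pruning.

```python
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear CKA similarity between two activation matrices.

    X, Y: (n_samples, n_features) activations from two layers.
    Returns a value in [0, 1]; 1 means identical representations
    up to rotation and isotropic scaling.
    """
    # Center each feature column before comparing.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    numerator = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    denominator = (np.linalg.norm(X.T @ X, ord="fro")
                   * np.linalg.norm(Y.T @ Y, ord="fro"))
    return float(numerator / denominator)

# Toy usage: compare hypothetical per-layer activations.
rng = np.random.default_rng(0)
layer_a = rng.standard_normal((128, 64))   # e.g., layer k activations
layer_b = layer_a + 0.1 * rng.standard_normal((128, 64))  # slightly perturbed
sim_same = linear_cka(layer_a, layer_a)    # ~1.0 by construction
sim_near = linear_cka(layer_a, layer_b)    # high, since layers are similar
```

In a layer-pruning study, one would extract activations from each transformer block of TabPFN or TabICL on a batch of tabular ICL episodes and inspect the resulting layer-by-layer similarity matrix for blocks of mutually similar layers.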
📝 Abstract
Despite the architectural similarities between tabular in-context learning (ICL) models and large language models (LLMs), little is known about how individual layers contribute to tabular prediction. In this paper, we investigate how latent spaces evolve across layers in tabular ICL models, identify potentially redundant layers, and compare these dynamics with those observed in LLMs. We analyze TabPFN and TabICL through the "layers as painters" perspective, finding that only subsets of layers share a common representational language, which suggests structural redundancy and offers opportunities for model compression and improved interpretability.