🤖 AI Summary
Vision-language models (VLMs) suffer from high inference costs, and existing layer-skipping strategies lack principled theoretical foundations. Method: This paper introduces the first unified analytical framework grounded in information theory and statistical learning theory to characterize the evolution of hidden representations. It formally establishes necessary and sufficient conditions for safe layer skipping: a layer may be omitted if the information it introduces is redundant relative to the minimal sufficient statistic required for the downstream task. Contribution/Results: Empirical validation confirms strong alignment between theoretical predictions and the layers that are actually skippable. Guided by the framework, skipping redundant layers yields an average 23% speedup with no performance degradation, while violating the condition causes significant accuracy loss. The work provides an interpretable, generalizable theoretical foundation and practical design principles for efficient VLM inference.
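The paper's skip condition is information-theoretic, but a common practical proxy for a layer's redundancy is how little it changes its input hidden states. The sketch below, a simplification not taken from the paper, flags layers whose input and output representations are nearly identical (high cosine similarity) as skip candidates; all names, the threshold, and the toy data are illustrative assumptions.

```python
import numpy as np

def layer_redundancy(h_in: np.ndarray, h_out: np.ndarray) -> float:
    """Proxy for how little new information a layer adds:
    mean cosine similarity between its input and output hidden
    states (close to 1.0 means the layer is largely redundant)."""
    num = np.sum(h_in * h_out, axis=-1)
    den = np.linalg.norm(h_in, axis=-1) * np.linalg.norm(h_out, axis=-1)
    return float(np.mean(num / den))

def skippable_layers(hidden_states: list, threshold: float = 0.99) -> list:
    """Indices of layers whose redundancy proxy exceeds the threshold.
    hidden_states[i] is the input to layer i; hidden_states[i+1] its output."""
    return [
        i for i in range(len(hidden_states) - 1)
        if layer_redundancy(hidden_states[i], hidden_states[i + 1]) >= threshold
    ]

# Toy example: 4 "layers" over 8 tokens with 16-dim states.
# Layer 2 barely perturbs its input, so it is flagged as skippable.
rng = np.random.default_rng(0)
states = [rng.normal(size=(8, 16))]
for i in range(4):
    delta = rng.normal(size=(8, 16)) * (0.01 if i == 2 else 1.0)
    states.append(states[-1] + delta)

print(skippable_layers(states))
```

In a real VLM one would collect `hidden_states` from a forward pass over a calibration set; the paper's framework additionally requires that the discarded information be irrelevant to the downstream task, which a similarity proxy alone does not guarantee.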
📝 Abstract
Vision-language models (VLMs) achieve impressive performance across a wide range of tasks, but their large size makes inference costly. Recent work shows that selectively skipping VLM layers can improve efficiency with minimal performance loss, or even performance gains. However, this technique remains underused because it is poorly understood when layer skipping is beneficial. In this paper, we develop a framework that uses information theory and learning theory to characterize the conditions under which layer skipping improves efficiency without sacrificing performance. Guided by this framework, we analyze the evolution of the VLM's hidden representations through the LLM backbone and show that the layers our framework predicts to be highly redundant coincide with those skipped by popular layer-skipping methods in practice, providing unified theoretical scaffolding for multiple efficient-inference techniques. Our experiments demonstrate that skipping such layers yields faster inference while preserving performance, and that applying skipping outside these conditions degrades the model.