Spectral Insights into Data-Oblivious Critical Layers in Large Language Models

📅 2025-05-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the limitation of downstream-data-dependent critical layer identification in pretrained large language models (LLMs), which hinders generalizability. The authors propose a data-agnostic spectral analysis method, centered kernel alignment (CKA)-based feature-space evolution modeling, to locate intrinsic critical layers without any downstream task data. The contributions are threefold: (1) they uncover a principal component mechanism underlying representation-space phase transitions aligned with semantic evolution (e.g., reasoning → conclusion); (2) they identify a universal dual pattern: critical layers exhibit both high fine-tuning sensitivity and heightened vulnerability to backdoor attacks; (3) they empirically validate cross-architectural and cross-task robustness: fine-tuning only critical layers accelerates domain adaptation convergence, while freezing them reduces backdoor attack success rates by up to 40%.

📝 Abstract
Understanding how feature representations evolve across layers in large language models (LLMs) is key to improving their interpretability and robustness. While recent studies have identified critical layers linked to specific functions or behaviors, these efforts typically rely on data-dependent analyses of fine-tuned models, limiting their use to post-hoc settings. In contrast, we introduce a data-oblivious approach to identify intrinsic critical layers in pre-fine-tuned LLMs by analyzing representation dynamics via Centered Kernel Alignment (CKA). We show that layers with significant shifts in representation space are also those most affected during fine-tuning, a pattern that holds consistently across tasks for a given model. Our spectral analysis further reveals that these shifts are driven by changes in the top principal components, which encode semantic transitions from rationales to conclusions. We further apply these findings to two practical scenarios: efficient domain adaptation, where fine-tuning critical layers leads to greater loss reduction compared to non-critical layers; and backdoor defense, where freezing them reduces attack success rates by up to 40%.
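The core measurement behind the approach, comparing representation spaces of adjacent layers with CKA, can be sketched in a few lines. This is a minimal illustration of linear CKA applied to hidden states, not the paper's actual implementation; the function names and the consecutive-layer scan are illustrative assumptions.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two representation matrices (n samples x d features)."""
    # Center each feature dimension (illustrative; the paper's exact pipeline may differ)
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # Linear CKA: ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

def layer_shift_scores(hidden_states):
    """1 - CKA between consecutive layers; larger score = bigger representation shift."""
    return [1.0 - linear_cka(hidden_states[i], hidden_states[i + 1])
            for i in range(len(hidden_states) - 1)]

# Toy demo: layers with low CKA to their predecessor are candidate critical layers
rng = np.random.default_rng(0)
layers = [rng.standard_normal((32, 16)) for _ in range(4)]
scores = layer_shift_scores(layers)
critical = int(np.argmax(scores))  # index of the largest between-layer shift
```

CKA is invariant to orthogonal transformations and isotropic scaling of the features, which is what makes it suitable for comparing layers whose coordinate bases differ.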
Problem

Research questions and friction points this paper is trying to address.

Identify intrinsic critical layers in pre-fine-tuned LLMs using data-oblivious spectral analysis.
Understand semantic transitions in representation dynamics via principal component changes.
Apply critical layer insights to improve domain adaptation and backdoor defense.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Data-oblivious critical layer identification via CKA
Spectral analysis of top principal components shifts
Critical layers enhance domain adaptation and backdoor defense
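The second bullet, tracking shifts in the top principal components across layers, can be illustrated by comparing the leading PCA subspaces of adjacent layers via principal angles. This is a hedged sketch under the assumption that "top principal components" means the leading right singular vectors of the centered hidden-state matrix; the overlap metric shown is one common choice, not necessarily the paper's.

```python
import numpy as np

def top_pc_subspace(H, k=5):
    """Top-k principal directions (k x d, orthonormal rows) of centered hidden states H."""
    H = H - H.mean(axis=0)
    _, _, Vt = np.linalg.svd(H, full_matrices=False)
    return Vt[:k]

def pc_overlap(V1, V2):
    """Mean squared cosine of principal angles between two k-dim subspaces.
    1.0 = identical subspaces, 0.0 = orthogonal subspaces."""
    s = np.linalg.svd(V1 @ V2.T, compute_uv=False)
    return float(np.mean(s ** 2))

# Toy usage: a low overlap between consecutive layers' top-PC subspaces
# would flag a representation-space phase transition
rng = np.random.default_rng(1)
H_prev, H_next = rng.standard_normal((64, 12)), rng.standard_normal((64, 12))
overlap = pc_overlap(top_pc_subspace(H_prev, k=3), top_pc_subspace(H_next, k=3))
```

Freezing or selectively fine-tuning the layers flagged this way corresponds to the domain-adaptation and backdoor-defense applications described above.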