AI Summary
Vision Transformers (ViTs) used as backbone architectures for vision foundation models often generate high-norm artifacts that degrade feature representation quality. In knowledge distillation, these artifacts dominate the loss function, causing student models to overfit spurious patterns while neglecting informative signals, thus limiting performance gains. To address this, we propose a null-space-guided knowledge distillation framework. Leveraging LoRA adapters, we efficiently construct null-space perturbations of teacher features, decoupling artifact suppression from semantic information preservation. We further introduce an energy redistribution strategy to refine teacher representations with fine-grained control. Our approach breaks the inherent trade-off between denoising and fidelity, achieving state-of-the-art performance across multiple downstream vision tasks. Empirically, it significantly improves student model accuracy and enhances the clarity and interpretability of learned visual representations.
Abstract
Vision Transformers are widely adopted as the backbone of vision foundation models, but they are known to produce high-norm artifacts that degrade representation quality. When knowledge distillation transfers these features to students, high-norm artifacts dominate the objective, so students overfit to artifacts and underweight informative signals, diminishing the gains from larger models. Prior work attempted to remove artifacts but encountered an inherent trade-off between artifact suppression and preserving the teacher's informative signals. To address this, we introduce Singular Nullspace-Guided Energy Reallocation (SiNGER), a novel distillation framework that suppresses artifacts while preserving informative signals. The key idea is principled teacher feature refinement: during refinement, we leverage a nullspace-guided perturbation to preserve information while suppressing artifacts. The refined teacher features are then distilled to the student. We implement this perturbation efficiently with a LoRA-based adapter that requires minimal structural modification. Extensive experiments show that SiNGER consistently improves student models, achieving state-of-the-art performance in multiple downstream tasks and producing clearer and more interpretable representations.
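To make the nullspace-guided refinement more concrete, below is a minimal PyTorch sketch of one way such a perturbation could be constructed. It is not the authors' exact algorithm: the function name, the LoRA factors `lora_A`/`lora_B`, the energy threshold `rank_keep`, and the SVD-based estimate of the signal subspace are illustrative assumptions drawn only from the abstract's description (a low-rank LoRA-style perturbation confined to a null space so that dominant semantic directions are preserved while artifact energy can be suppressed).

```python
import torch


def nullspace_guided_refinement(feats: torch.Tensor,
                                lora_A: torch.Tensor,
                                lora_B: torch.Tensor,
                                rank_keep: float = 0.95) -> torch.Tensor:
    """Hedged sketch of nullspace-guided teacher-feature refinement.

    feats:  (N, D) teacher token features
    lora_A: (D, r) low-rank adapter factor (hypothetical parameterization)
    lora_B: (r, D) low-rank adapter factor (hypothetical parameterization)
    rank_keep: fraction of spectral energy treated as "signal" (assumption)
    """
    # SVD of the teacher features: the top right-singular vectors span the
    # directions carrying most of the semantic energy.
    U, S, Vh = torch.linalg.svd(feats, full_matrices=False)

    # Keep enough singular directions to cover `rank_keep` of the energy.
    energy = torch.cumsum(S ** 2, dim=0) / torch.sum(S ** 2)
    k = int((energy < rank_keep).sum().item()) + 1
    V_top = Vh[:k].T                                   # (D, k) signal basis

    # Projector onto the approximate null space (orthogonal complement of
    # the signal subspace): perturbations projected here cannot alter the
    # preserved semantic directions.
    eye = torch.eye(feats.shape[1], dtype=feats.dtype, device=feats.device)
    P_null = eye - V_top @ V_top.T                     # (D, D)

    # LoRA-style low-rank perturbation, restricted to the null space.
    delta = (feats @ lora_A) @ lora_B                  # (N, D)
    return feats + delta @ P_null
```

In an actual distillation loop, the refined output of such a function would replace the raw teacher features in the distillation loss, so the student is supervised by artifact-suppressed targets rather than by the high-norm tokens themselves.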