SiNGER: A Clearer Voice Distills Vision Transformers Further

πŸ“… 2025-09-25
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Vision Transformers (ViTs) used as backbone architectures for vision foundation models often generate high-norm artifacts that degrade feature representation quality. In knowledge distillation, these artifacts dominate the loss function, causing student models to overfit spurious patterns while neglecting informative signalsβ€”thus limiting performance gains. To address this, we propose a null-space-guided knowledge distillation framework. Leveraging LoRA adapters, we efficiently construct null-space perturbations of teacher features, decoupling artifact suppression from semantic information preservation. We further introduce an energy redistribution strategy to refine teacher representations with fine-grained control. Our approach breaks the inherent trade-off between denoising and fidelity, achieving state-of-the-art performance across multiple downstream vision tasks. Empirically, it significantly improves student model accuracy and enhances the clarity and interpretability of learned visual representations.

πŸ“ Abstract
Vision Transformers are widely adopted as the backbone of vision foundation models, but they are known to produce high-norm artifacts that degrade representation quality. When knowledge distillation transfers these features to students, high-norm artifacts dominate the objective, so students overfit to artifacts and underweight informative signals, diminishing the gains from larger models. Prior work attempted to remove artifacts but encountered an inherent trade-off between artifact suppression and preserving informative signals from teachers. To address this, we introduce Singular Nullspace-Guided Energy Reallocation (SiNGER), a novel distillation framework that suppresses artifacts while preserving informative signals. The key idea is principled teacher feature refinement: during refinement, we leverage a nullspace-guided perturbation to preserve information while suppressing artifacts. The refined teacher features are then distilled to a student. We implement this perturbation efficiently with a LoRA-based adapter that requires minimal structural modification. Extensive experiments show that SiNGER consistently improves student models, achieving state-of-the-art performance in multiple downstream tasks and producing clearer and more interpretable representations.
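The core idea of the nullspace-guided perturbation can be illustrated with a small sketch. This is an assumption-laden toy, not the paper's implementation: it uses a plain SVD to find the nullspace of a feature matrix and projects a perturbation onto it, so the perturbation cannot disturb the directions that carry the teacher's informative signal. The function name and the `rank_tol` threshold are hypothetical.

```python
import numpy as np

def nullspace_perturbation(features, delta, rank_tol=1e-6):
    """Project a perturbation onto the nullspace of a feature matrix.

    Components of `delta` lying in the row space of `features` would alter
    the informative signal; keeping only the nullspace component lets the
    perturbation suppress artifacts without touching that signal.
    Illustrative sketch only, not the SiNGER implementation.
    """
    # SVD of the teacher feature matrix (tokens x channels)
    _, s, vt = np.linalg.svd(features, full_matrices=True)
    rank = int(np.sum(s > rank_tol * s[0]))
    # Nullspace basis: right singular vectors beyond the numerical rank
    null_basis = vt[rank:].T                      # (channels, channels - rank)
    # Keep only the component of delta inside the nullspace
    return delta @ null_basis @ null_basis.T

# Toy check: a rank-2 feature matrix whose nullspace is the third axis
F = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
d = np.array([[0.3, -0.2, 0.5]])
p = nullspace_perturbation(F, d)
print(np.allclose(p, [[0.0, 0.0, 0.5]]))          # True: only the e3 component survives
```

The projection `null_basis @ null_basis.T` is invariant to sign flips in the SVD, so the result does not depend on the particular basis NumPy returns.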
Problem

Research questions and friction points this paper is trying to address.

Vision Transformers produce high-norm artifacts degrading representation quality
Knowledge distillation causes students to overfit artifacts and underweight informative signals
Existing methods face trade-off between artifact suppression and signal preservation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Nullspace-guided perturbation suppresses artifacts
LoRA-based adapter enables efficient feature refinement
Distillation framework preserves informative teacher signals
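The LoRA-based adapter mentioned above follows the standard low-rank adaptation pattern: a frozen base weight plus a trainable low-rank update, so feature refinement adds few parameters and no structural change. The sketch below shows that pattern in minimal form; the class name and hyperparameters are hypothetical, not taken from the paper.

```python
import numpy as np

class LoRALinear:
    """Minimal LoRA-style adapter: y = x W + scale * (x A) B.

    The base weight W stays frozen; only the low-rank factors A and B are
    trained. With B zero-initialized, the adapter starts as an exact
    identity on the base layer. Hypothetical sketch, not SiNGER's code.
    """
    def __init__(self, weight, rank=4, scale=1.0, seed=0):
        rng = np.random.default_rng(seed)
        d_in, d_out = weight.shape
        self.weight = weight                         # frozen base weight
        self.A = rng.normal(0, 0.02, (d_in, rank))   # trainable down-projection
        self.B = np.zeros((rank, d_out))             # trainable up-projection, zero-init
        self.scale = scale

    def __call__(self, x):
        return x @ self.weight + self.scale * (x @ self.A) @ self.B

# At initialization the adapter leaves the base layer's output unchanged
W = np.eye(3)
layer = LoRALinear(W)
x = np.ones((1, 3))
print(np.allclose(layer(x), x @ W))                  # True: B is zero-initialized
```

Zero-initializing `B` is the usual LoRA convention: training starts from the unmodified teacher and the refinement is learned as a residual on top of it.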
πŸ”Ž Similar Papers
No similar papers found.
Geunhyeok Yu — Department of Software Convergence, Kyung Hee University, Republic of Korea
Sunjae Jeong — Department of Software Convergence, Kyung Hee University, Republic of Korea
Yoonyoung Choi — Department of Software Convergence, Kyung Hee University, Republic of Korea
Jaeseung Kim — MOBILTECH CO., LTD, Republic of Korea
Hyoseok Hwang — Kyung Hee University

computer vision · machine learning · robotics