🤖 AI Summary
This work addresses the challenge of deploying CLIP models in resource-constrained environments, given their high computational and memory demands; existing compression methods often suffer degraded feature representation under extreme compression. To this end, we propose CLIP-Map, a novel compression framework that introduces a learnable structured mapping paradigm. By integrating full mapping with Kronecker decomposition, CLIP-Map achieves efficient compression while preserving the original weight information. Furthermore, a diagonal inheritance initialization mechanism is designed to mitigate distribution shift during compression. Extensive experiments demonstrate that CLIP-Map consistently outperforms existing selective compression approaches across various compression ratios, with particularly significant performance gains observed under high compression rates.
📝 Abstract
Contrastive Language-Image Pre-training (CLIP) has been widely applied to various computer vision tasks, e.g., text-to-image generation, image-text retrieval, and image captioning. However, CLIP suffers from high memory and computation costs, which prohibits its use in resource-limited application scenarios. Existing CLIP compression methods typically reduce the size of pre-trained CLIP weights by selecting a subset of them as weight inheritance for further retraining, via mask optimization or importance measurement of weights. However, such select-based weight inheritance often compromises the feature representation ability, especially under extreme compression. In this paper, we propose a novel mapping-based CLIP compression framework, CLIP-Map. It leverages learnable matrices to map and combine pretrained weights via Full-Mapping with Kronecker Factorization, aiming to preserve as much information from the original weights as possible. To mitigate the optimization challenges introduced by the learnable mapping, we propose Diagonal Inheritance Initialization, which reduces distribution shift for efficient and effective mapping learning. Extensive experimental results demonstrate that the proposed CLIP-Map outperforms select-based frameworks across various compression ratios, with particularly significant gains observed under high compression settings.
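The core idea of mapping-based compression — projecting a pretrained weight matrix through small learnable matrices that are themselves Kronecker-factored — can be sketched as below. All shapes, variable names, and the identity-style initialization are illustrative assumptions for exposition, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy pretrained weight from a CLIP layer (real layers are much larger).
d_out, d_in = 8, 12
W = rng.standard_normal((d_out, d_in))

# Target compressed shape.
r_out, r_in = 4, 6

# A full learnable output-side mapping needs r_out * d_out = 32 parameters;
# factoring it as a Kronecker product A (2x4) kron B (2x2) needs only 8 + 4.
A = rng.standard_normal((2, 4)) * 0.1
B = np.eye(2)
A[:, :2] += np.eye(2)      # illustrative diagonal-style init: the mapping
                           # starts close to selecting the first rows of W,
                           # which limits distribution shift at the start.
M_out = np.kron(A, B)      # shape (r_out, d_out) = (4, 8)

# Input-side mapping, likewise Kronecker-factored: C (4x3) kron D (3x2).
C = rng.standard_normal((4, 3)) * 0.1
D = rng.standard_normal((3, 2)) * 0.1
M_in = np.kron(C, D)       # shape (d_in, r_in) = (12, 6)

# Compressed weight: each entry is a learned combination of ALL pretrained
# entries, rather than an inherited subset of them.
W_small = M_out @ W @ M_in  # shape (4, 6)
print(W_small.shape)        # (4, 6)
```

In a real setup the factors A, B, C, D would be trainable parameters optimized during retraining, while W stays frozen as the source of inherited information; the Kronecker structure is what keeps the mapping's parameter count far below that of a dense mapping.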