Jasper-Token-Compression-600M Technical Report

📅 2025-11-18
🤖 AI Summary
This work addresses the challenge of simultaneously achieving high efficiency and quality in bilingual (Chinese–English) text embedding models. We propose a lightweight modeling approach that integrates knowledge distillation with dynamic token compression. Our core innovation is a learnable, one-dimensional convolution-based token compression module that enables input-adaptive, dynamic compression ratios. To preserve semantic discriminability, we enhance compressed representations via contrastive learning and jointly optimize the model on bilingual corpora. Experimental results demonstrate that our 600M-parameter model matches the performance of an 8B-parameter baseline across multiple bilingual retrieval and semantic similarity benchmarks, while achieving a 2.3× speedup in inference latency and reducing memory footprint by 64%. It significantly outperforms conventional models of comparable size.
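The convolution-based compression described above can be sketched, in a deliberately simplified form, as a strided 1-D convolution over the token axis: the stride sets the compression ratio, so a sequence of L token embeddings shrinks to roughly L / ratio. This is a numpy stand-in for the paper's learned module; the function name, the single kernel shared across embedding dimensions, and the zero padding are illustrative assumptions, not details from the report.

```python
import numpy as np

def compress_tokens(tokens, weights, ratio):
    """Compress a (seq_len, dim) token sequence with a strided 1-D convolution.

    tokens  : (seq_len, dim) token embeddings
    weights : (kernel,) convolution kernel shared across dims (a
              simplification; the real module is a learned convolution)
    ratio   : integer compression ratio, used as the convolution stride
    Returns a (ceil(seq_len / ratio), dim) compressed sequence.
    """
    seq_len, dim = tokens.shape
    k = len(weights)
    # zero-pad so every output position sees a full kernel window
    padded = np.vstack([tokens, np.zeros((k - 1, dim))])
    out_len = int(np.ceil(seq_len / ratio))
    out = np.empty((out_len, dim))
    for i in range(out_len):
        window = padded[i * ratio : i * ratio + k]  # (k, dim) slice of tokens
        out[i] = weights @ window                   # weighted sum over window
    return out
```

Because the stride is just an argument, the same weights serve every compression ratio, which is what makes the ratio adjustable at inference time.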

๐Ÿ“ Abstract
This technical report presents the training methodology and evaluation results of the open-source Jasper-Token-Compression-600M model, released in November 2025. Building on the distillation-based recipes of the English-only Stella and Jasper models, we extend this approach to the bilingual (English and Chinese) setting and further improve performance through contrastive learning. A key innovation of our model is a one-dimensional convolution-based token compression module. We dynamically adjust the compression rate during training, enabling the model to learn more robust and efficient compressed text representations. By combining knowledge distillation with token compression, we achieve significant improvements in both embedding quality and inference efficiency: our model runs more efficiently than a conventional 0.6B model while matching the performance of an 8B model. For more information on the model release, visit: https://huggingface.co/infgrad/Jasper-Token-Compression-600M.
Problem

Research questions and friction points this paper is trying to address.

Develop a bilingual (English and Chinese) token-compression embedding model
Improve embedding quality and inference efficiency simultaneously
Reach 8B-model performance at 0.6B-model inference cost
Innovation

Methods, ideas, or system contributions that make the work stand out.

Applies knowledge distillation to bilingual text embedding
Introduces a dynamic one-dimensional convolution-based token compression module
Combines contrastive learning with adjustable compression rates
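As a rough illustration of the last point, the sketch below pairs an in-batch contrastive (InfoNCE) loss, used to preserve semantic discriminability of compressed embeddings, with per-batch sampling of a compression ratio. The candidate ratio set, temperature value, and in-batch-negatives setup are assumptions for illustration; the report does not spell out these hyperparameters here.

```python
import numpy as np

def info_nce(queries, docs, temperature=0.05):
    """In-batch contrastive (InfoNCE) loss on L2-normalized embeddings.

    queries, docs : (batch, dim); query i's positive is doc i, and all
    other docs in the batch serve as negatives.
    """
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    d = docs / np.linalg.norm(docs, axis=1, keepdims=True)
    logits = q @ d.T / temperature                 # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    # positives sit on the diagonal: query i matches doc i
    return -np.log(np.diag(probs)).mean()

def sample_ratio(rng, choices=(1, 2, 4, 8)):
    """Draw a per-batch compression ratio, as in dynamic-ratio training.

    The candidate set (1, 2, 4, 8) is a hypothetical choice."""
    return rng.choice(choices)
```

Varying the ratio across batches exposes the model to every compression level during training, so a single checkpoint remains usable at whichever ratio the user picks at inference time.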