🤖 AI Summary
This work addresses the challenge of simultaneously achieving high compression ratios and high OCR accuracy in long-context visual compression. We propose an efficient compression framework based on optical 2D mapping: DeepSeek-OCR pairs a deep encoder (DeepEncoder), designed to keep activations low under high-resolution input, with a sparse Mixture-of-Experts (MoE) decoder (DeepSeek3B-MoE-A570M), significantly reducing the visual token count while enabling precise text reconstruction. The key contribution is an initial demonstration that optical 2D mapping can compress long text contexts efficiently, opening research directions such as historical long-context compression and memory-forgetting mechanisms in large models. Experiments demonstrate state-of-the-art OCR performance: surpassing GOT-OCR2.0 (256 tokens/page) with only 100 vision tokens; outperforming MinerU2.0 (6000+ tokens/page) with fewer than 800 vision tokens; processing 200K+ pages/day on a single GPU; and maintaining ~97% OCR precision at compression ratios below 10×.
📝 Abstract
We present DeepSeek-OCR as an initial investigation into the feasibility of compressing long contexts via optical 2D mapping. DeepSeek-OCR consists of two components: DeepEncoder and DeepSeek3B-MoE-A570M as the decoder. Specifically, DeepEncoder serves as the core engine, designed to maintain low activations under high-resolution input while achieving high compression ratios to ensure an optimal and manageable number of vision tokens. Experiments show that when the number of text tokens is within 10 times that of vision tokens (i.e., a compression ratio < 10×), the model can achieve decoding (OCR) precision of 97%. Even at a compression ratio of 20×, the OCR accuracy still remains at about 60%. This shows considerable promise for research areas such as historical long-context compression and memory-forgetting mechanisms in LLMs. Beyond this, DeepSeek-OCR also demonstrates high practical value. On OmniDocBench, it surpasses GOT-OCR2.0 (256 tokens/page) using only 100 vision tokens, and outperforms MinerU2.0 (6000+ tokens per page on average) while utilizing fewer than 800 vision tokens. In production, DeepSeek-OCR can generate training data for LLMs/VLMs at a scale of 200k+ pages per day on a single A100-40G. Code and model weights are publicly accessible at http://github.com/deepseek-ai/DeepSeek-OCR.
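The compression ratio discussed above is simply the count of text tokens a document decodes to, divided by the count of vision tokens used to represent its rendered pages. A minimal sketch of this metric (function names here are illustrative, not the paper's actual API):

```python
def compression_ratio(n_text_tokens: int, n_vision_tokens: int) -> float:
    """Text-to-vision token compression ratio: how many text tokens
    each vision token must carry on average."""
    if n_vision_tokens <= 0:
        raise ValueError("vision token count must be positive")
    return n_text_tokens / n_vision_tokens

# A page of ~1000 text tokens encoded into 100 vision tokens gives a 10x
# ratio; per the abstract, OCR precision is ~97% below 10x and ~60% at 20x.
print(compression_ratio(1000, 100))  # → 10.0
print(compression_ratio(2000, 100))  # → 20.0
```

Under this framing, the encoder's job is to keep `n_vision_tokens` small per page while the decoder keeps reconstruction precision high as the ratio grows.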