Deepseek’s new OCR system processes texts as images and compresses them by a factor of up to 10. The technology, capable of processing 33 million pages a day, allows AI models to read much longer documents.
Deepseek, a Chinese artificial intelligence company, is attracting attention with its new OCR (Optical Character Recognition) system developed for more efficient processing of text-based documents. The system compresses image-based texts, enabling AI models to process much longer documents without hitting their memory limits.
Processing Text as Visual Data
According to Deepseek’s technical report, the system renders text as images and analyzes those, rather than processing the text directly as tokens. This approach significantly reduces the computational load: the new OCR system can compress texts by a factor of up to 10 while retaining about 97% of the information.
Large language models represent text as tokens, each covering a few characters. Researchers are working to expand the context window so that models can handle long documents and conversations spanning millions of tokens. However, the computational cost grows quickly as the number of tokens processed at once increases: a large token capacity keeps the model from running out of room on long documents, but it makes every request more expensive. Deepseek’s OCR approach sidesteps this by treating very long content as an image, effectively reading it as pixels rather than text tokens.
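To see why the token count matters so much, a back-of-the-envelope comparison helps. The sketch below is illustrative only: it assumes the roughly 10x compression figure quoted in the article and the standard quadratic cost of transformer self-attention; the numbers are not Deepseek-specific measurements.

```python
# Illustrative only: rough cost comparison between feeding a long document
# as text tokens vs. as compressed vision tokens. The 10x ratio is the
# compression figure quoted above; the quadratic term is the standard cost
# model for transformer self-attention, not a Deepseek-specific number.

def attention_cost(num_tokens: int) -> int:
    """Self-attention work grows roughly with the square of the sequence length."""
    return num_tokens ** 2

text_tokens = 100_000               # a long report as ordinary text tokens
vision_tokens = text_tokens // 10   # the same content at ~10x optical compression

print(f"text tokens:   {text_tokens:>9,} -> relative cost {attention_cost(text_tokens):,}")
print(f"vision tokens: {vision_tokens:>9,} -> relative cost {attention_cost(vision_tokens):,}")
# ~10x fewer tokens means roughly 100x less self-attention work in this simple model.
```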
Seeing Long Texts as Pixels
The core of the system consists of two main components: DeepEncoder and Deepseek3B-MoE. DeepEncoder, which handles image processing, has 380 million parameters and combines Meta’s 80-million-parameter SAM (Segment Anything Model) with OpenAI’s 300-million-parameter CLIP model. Deepseek3B-MoE, which generates the text output, uses 570 million active parameters. Between the two vision models sits a 16x compressor that drastically shrinks the image representation and speeds up processing. For example, the 4,096 tokens of a 1,024 × 1,024 pixel image are reduced to just 256 tokens after compression.
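The token arithmetic can be checked directly. The sketch below assumes the 16 × 16 pixel patch size common in vision transformers (the article only states the image size, the 16x compression factor, and the resulting token counts) and shows how a 1,024 × 1,024 image yields 4,096 patch tokens, which the compressor reduces to 256.

```python
# Illustrative arithmetic only. The 16x16 patch size is an assumption based on
# common vision-transformer practice; the article gives just the image size,
# the 16x compression factor, and the resulting token counts.

image_size = 1024          # pixels per side
patch_size = 16            # assumed pixels per side of each patch
compression_factor = 16    # reduction applied by the intermediate compressor

patch_tokens = (image_size // patch_size) ** 2        # 64 * 64 = 4,096 tokens
vision_tokens = patch_tokens // compression_factor    # 4,096 / 16 = 256 tokens

print(patch_tokens, vision_tokens)  # 4096 256
```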
Depending on the resolution, Deepseek OCR can operate with anywhere from 64 to 400 “vision tokens” per page, a fraction of the thousands of tokens that conventional OCR pipelines typically need. In OmniDocBench tests, the system outperformed GOT-OCR 2.0 while using only 100 vision tokens, and it surpassed MinerU 2.0, which needs more than 6,000 tokens per page, while staying under 800 tokens.
- The system is tuned for different document types: it uses 64 tokens for simple presentations, 100 tokens for books and reports, and a special “Gundam mode” of 800 tokens for complex layouts such as newspapers (a rough illustration of these presets follows this list).
- Deepseek OCR can process not only text but also complex visual elements like diagrams, chemical formulas, and geometric shapes. Furthermore, it works in approximately 100 languages, can preserve formatting, and can generate plain text or general visual descriptions if desired.
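As a rough illustration of how those presets might map to documents, the snippet below simply hard-codes the token budgets quoted above. The mode names and the `pick_mode` helper are hypothetical, invented for clarity; they are not Deepseek OCR’s actual configuration interface.

```python
# Hypothetical illustration of the token budgets quoted above. The mode names
# and this helper are invented for clarity; they do not reflect Deepseek OCR's
# real configuration interface.

TOKEN_BUDGETS = {
    "presentation": 64,   # simple slides
    "book": 100,          # books and reports
    "gundam": 800,        # "Gundam mode" for dense layouts such as newspapers
}

def pick_mode(document_type: str) -> int:
    """Return the vision-token budget for a given document type."""
    return TOKEN_BUDGETS.get(document_type, 100)  # default to the book/report preset

print(pick_mode("gundam"))  # 800
```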
Processes 33 Million Pages a Day
Approximately 30 million PDF pages were used to train the system, around 25 million of them English and Chinese documents. On top of these, the training data included 10 million synthetic diagrams, 5 million chemical formulas, and 1 million geometric figures.
In real-world use, Deepseek OCR delivers very high throughput. The system can process more than 200,000 pages a day on a single Nvidia A100 GPU; with 20 servers of eight A100 GPUs each, capacity rises to 33 million pages per day. That kind of speed could greatly simplify producing training data for new AI models. Both the code and the model weights are publicly available (see the source section).
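The scale-up figure follows directly from the per-GPU number. The sketch below just multiplies the article’s own figures; treating the headline 33 million as a rounding of this product is an assumption.

```python
# Checking the throughput figures quoted above.
pages_per_gpu_per_day = 200_000
gpus = 20 * 8                      # 20 servers with eight A100s each

total_pages_per_day = pages_per_gpu_per_day * gpus
print(f"{gpus} GPUs -> {total_pages_per_day:,} pages/day")  # 160 GPUs -> 32,000,000 pages/day
# The 33 million/day headline is consistent with this, since the single-GPU
# figure is "more than 200,000" pages per day.
```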