---
tags:
- ocr
- document-processing
- dots-ocr
- multilingual
- markdown
- uv-script
- generated
dataset_info:
  features:
  - name: document_id
    dtype: string
  - name: page_number
    dtype: string
  - name: image
    dtype: image
  - name: text
    dtype: string
  - name: alto_xml
    dtype: string
  - name: has_image
    dtype: bool
  - name: has_alto
    dtype: bool
  - name: markdown
    dtype: string
  - name: inference_info
    dtype: string
  splits:
  - name: train
    num_bytes: 734853
    num_examples: 10
  download_size: 714829
  dataset_size: 734853
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

# Document OCR using dots.ocr

This dataset contains OCR results from images in [davanstrien/playbills-pdf-images-text](https://huggingface.co/datasets/davanstrien/playbills-pdf-images-text), produced with DoTS.ocr, a compact 1.7B-parameter multilingual model.

## Processing Details

- **Source Dataset**: [davanstrien/playbills-pdf-images-text](https://huggingface.co/datasets/davanstrien/playbills-pdf-images-text)
- **Model**: [rednote-hilab/dots.ocr](https://huggingface.co/rednote-hilab/dots.ocr)
- **Number of Samples**: 10
- **Processing Time**: 3.8 min
- **Processing Date**: 2025-10-07 21:14 UTC

### Configuration

- **Image Column**: `image`
- **Output Column**: `markdown`
- **Dataset Split**: `train`
- **Batch Size**: 32
- **Prompt Mode**: layout-all
- **Max Model Length**: 32,768 tokens
- **Max Output Tokens**: 8,192
- **GPU Memory Utilization**: 80.0%

## Model Information

DoTS.ocr is a compact multilingual document parsing model that excels at:

- 🌍 **100+ Languages** - Multilingual document support
- 📊 **Table extraction** - Structured data recognition
- 📐 **Formulas** - Mathematical notation preservation
- 📝 **Layout-aware** - Reading order and structure preservation
- 🎯 **Compact** - Only 1.7B parameters

## Dataset Structure

The dataset contains all original columns plus:

- `markdown`: The extracted text in markdown format
- `inference_info`: JSON list tracking all OCR models applied to this dataset
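As a quick illustration of the schema, the sketch below builds a stand-in record shaped like one dataset row (the field values are illustrative, not taken from the real data) and shows how the `has_image`/`has_alto` flags can drive filtering:

```python
import json

# A stand-in record with the same fields as one dataset row
# (values are illustrative, not taken from the real data).
row = {
    "document_id": "playbill-001",
    "page_number": "1",
    "has_image": True,
    "has_alto": False,
    "markdown": "# THEATRE ROYAL\n\nThis Evening...",
    "inference_info": json.dumps(
        [{"column_name": "markdown", "model_id": "rednote-hilab/dots.ocr"}]
    ),
}

def has_full_ground_truth(example):
    """True when a page has both its source image and ALTO XML."""
    return example["has_image"] and example["has_alto"]

print(has_full_ground_truth(row))  # → False (this page has no ALTO XML)
```

With the real dataset, the same predicate could be passed to `datasets.Dataset.filter`.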
## Usage

```python
from datasets import load_dataset
import json

# Load the dataset
# (note: "{output_dataset_id}" is an unfilled template placeholder;
# substitute this dataset's repo id)
dataset = load_dataset("{output_dataset_id}", split="train")

# Access the markdown text
for example in dataset:
    print(example["markdown"])
    break

# View all OCR models applied to this dataset
inference_info = json.loads(dataset[0]["inference_info"])
for info in inference_info:
    print(f"Column: {info['column_name']} - Model: {info['model_id']}")
```

## Reproduction

This dataset was generated using the [uv-scripts/ocr](https://huggingface.co/datasets/uv-scripts/ocr) DoTS OCR script:

```bash
uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/dots-ocr.py \
    davanstrien/playbills-pdf-images-text \
    \
    --image-column image \
    --batch-size 32 \
    --prompt-mode layout-all \
    --max-model-len 32768 \
    --max-tokens 8192 \
    --gpu-memory-utilization 0.8
```

Generated with 🤖 [UV Scripts](https://huggingface.co/uv-scripts)
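A common follow-up to the usage example above is exporting the OCR output to disk. The sketch below uses stand-in rows rather than the loaded dataset, and the `ocr_markdown` directory name and file-naming scheme are assumptions, not part of the script's output:

```python
from pathlib import Path

# Stand-in rows; with the real dataset, iterate over the loaded `dataset` instead.
rows = [
    {"document_id": "playbill-001", "page_number": "1", "markdown": "# Page one"},
    {"document_id": "playbill-001", "page_number": "2", "markdown": "# Page two"},
]

out_dir = Path("ocr_markdown")  # arbitrary output directory
out_dir.mkdir(exist_ok=True)

for row in rows:
    # One file per page: <document_id>_p<page_number>.md
    target = out_dir / f"{row['document_id']}_p{row['page_number']}.md"
    target.write_text(row["markdown"], encoding="utf-8")

print(sorted(p.name for p in out_dir.glob("*.md")))
```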