---
language:
- bn
- en
- gu
- hi
- kn
- ml
- mr
- or
- pa
- ta
- te
- ur
license: cc-by-4.0
size_categories:
- 1M<n<10M
---

**Pralekha** is a large-scale parallel document dataset spanning **11 Indic languages** and **English**. It comprises over **3 million** document pairs, of which **1.5 million** are English-Indic pairs. The dataset serves both as a benchmark for evaluating **Cross-Lingual Document Alignment (CLDA)** techniques and as a domain-specific parallel corpus for training document-level **Machine Translation (MT)** models for Indic languages.

---

## Dataset Description

**Pralekha** covers 12 languages: Bengali (`ben`), Gujarati (`guj`), Hindi (`hin`), Kannada (`kan`), Malayalam (`mal`), Marathi (`mar`), Odia (`ori`), Punjabi (`pan`), Tamil (`tam`), Telugu (`tel`), Urdu (`urd`), and English (`eng`). It includes a mixture of high- and medium-resource languages, covering 11 different scripts. The dataset spans two broad domains, **News Bulletins** ([Indian Press Information Bureau (PIB)](https://pib.gov.in)) and **Podcast Scripts** ([Mann Ki Baat (MKB)](https://www.pmindia.gov.in/en/mann-ki-baat)), offering both written and spoken forms of data. All the data is human-written or human-verified, ensuring high quality.

While these sources provide `alignable` (parallel) documents, real-world multilingual corpora often also contain `unalignable` documents. To simulate this for CLDA evaluation, we sample `unalignable` documents from [Sangraha Unverified](https://huggingface.co/datasets/ai4bharat/sangraha/viewer/unverified), selecting 50% of Pralekha's size to maintain a 1:2 ratio of `unalignable` to `alignable` documents.

For Machine Translation (MT) tasks, we first randomly sample 1,000 documents per English-Indic language pair from the `alignable` subset for each of the development (dev) and test sets, ensuring a good distribution of varying document lengths.
After excluding these sampled documents, the remaining documents form the training set for document-level machine translation models.

---

## Data Fields

### Alignable & Unalignable Sets

- **`n_id`:** Unique identifier for `alignable` document pairs (random `n_id`s are assigned to the `unalignable` set).
- **`doc_id`:** Unique identifier for individual documents.
- **`lang`:** Language of the document (ISO 639-3 code).
- **`text`:** The textual content of the document.

### Train, Dev & Test Sets

- **`src_lang`:** Source language (`eng`).
- **`src_text`:** Source-language text.
- **`tgt_lang`:** Target language (ISO 639-3 code).
- **`tgt_text`:** Target-language text.

---

## Usage

You can load specific **subsets** and **splits** from this dataset using the `datasets` library.

### Load an entire subset

```python
from datasets import load_dataset

dataset = load_dataset("ai4bharat/Pralekha", data_dir="<subset>")
# <subset> = alignable, unalignable, train, dev, or test.
```

### Load a specific split within a subset

```python
from datasets import load_dataset

dataset = load_dataset("ai4bharat/Pralekha", data_dir="<subset>/<lang>")
# <subset> = alignable or unalignable;
# <lang> = ben, eng, guj, hin, kan, mal, mar, ori, pan, tam, tel, urd.
```

```python
from datasets import load_dataset

dataset = load_dataset("ai4bharat/Pralekha", data_dir="<split>/eng_<lang>")
# <split> = train, dev, or test;
# <lang> = ben, guj, hin, kan, mal, mar, ori, pan, tam, tel, urd.
```

---

## Data Size Statistics

| Split           | Number of Documents | Size (bytes)   |
|-----------------|---------------------|----------------|
| **Alignable**   | 1,566,404           | 10,274,361,211 |
| **Unalignable** | 783,197             | 4,466,506,637  |
| **Total**       | 2,349,601           | 14,740,867,848 |

## Language-wise Statistics

| Language (`ISO-3`) | Alignable Documents | Unalignable Documents | Total Documents |
|--------------------|---------------------|-----------------------|-----------------|
| Bengali (`ben`)    | 95,813              | 47,906                | 143,719         |
| English (`eng`)    | 298,111             | 149,055               | 447,166         |
| Gujarati (`guj`)   | 67,847              | 33,923                | 101,770         |
| Hindi (`hin`)      | 204,809             | 102,404               | 307,213         |
| Kannada (`kan`)    | 61,998              | 30,999                | 92,997          |
| Malayalam (`mal`)  | 67,760              | 33,880                | 101,640         |
| Marathi (`mar`)    | 135,301             | 67,650                | 202,951         |
| Odia (`ori`)       | 46,167              | 23,083                | 69,250          |
| Punjabi (`pan`)    | 108,459             | 54,229                | 162,688         |
| Tamil (`tam`)      | 149,637             | 74,818                | 224,455         |
| Telugu (`tel`)     | 110,077             | 55,038                | 165,115         |
| Urdu (`urd`)       | 220,425             | 110,212               | 330,637         |

---

## Citation

If you use Pralekha in your work, please cite us:

```bibtex
@inproceedings{suryanarayanan-etal-2025-pralekha,
    title = "{PRALEKHA}: Cross-Lingual Document Alignment for {I}ndic Languages",
    author = "Suryanarayanan, Sanjay and
      Song, Haiyue and
      Khan, Mohammed Safi Ur Rahman and
      Kunchukuttan, Anoop and
      Dabre, Raj",
    booktitle = "Proceedings of the 14th International Joint Conference on Natural Language Processing and the 4th Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics",
    month = dec,
    year = "2025",
    address = "Mumbai, India",
    publisher = "The Asian Federation of Natural Language Processing and The Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.ijcnlp-long.37/",
    pages = "662--676"
}
```

## License

This dataset is released under the [**CC BY 4.0**](https://creativecommons.org/licenses/by/4.0/) license.
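The 1:2 `unalignable`-to-`alignable` ratio and the split totals can be sanity-checked directly from the language-wise statistics; a minimal sketch with the counts copied from the table above (the per-language halving-with-floor relationship is an observation from these numbers, not a documented construction detail):

```python
# Per-language document counts, copied from the language-wise statistics table.
alignable = {
    "ben": 95_813, "eng": 298_111, "guj": 67_847, "hin": 204_809,
    "kan": 61_998, "mal": 67_760, "mar": 135_301, "ori": 46_167,
    "pan": 108_459, "tam": 149_637, "tel": 110_077, "urd": 220_425,
}
unalignable = {
    "ben": 47_906, "eng": 149_055, "guj": 33_923, "hin": 102_404,
    "kan": 30_999, "mal": 33_880, "mar": 67_650, "ori": 23_083,
    "pan": 54_229, "tam": 74_818, "tel": 55_038, "urd": 110_212,
}

# Each language's unalignable count is 50% of its alignable count (floored),
# matching the stated 1:2 unalignable-to-alignable ratio.
for lang, n in alignable.items():
    assert unalignable[lang] == n // 2, lang

# Totals agree with the data-size statistics table.
print(sum(alignable.values()), sum(unalignable.values()))
# → 1566404 783197
```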
## Contact

For any questions or feedback, please contact:

- Sanjay Suryanarayanan ([sanj.ai@outlook.com](mailto:sanj.ai@outlook.com))
- Haiyue Song ([haiyue.song@nict.go.jp](mailto:haiyue.song@nict.go.jp))
- Raj Dabre ([raj.dabre@cse.iitm.ac.in](mailto:raj.dabre@cse.iitm.ac.in))

Please get in touch with us for any copyright concerns.