---
language:
- bn
- en
- gu
- hi
- kn
- ml
- mr
- or
- pa
- ta
- te
- ur
license: cc-by-4.0
size_categories:
- 1M<n<10M
pretty_name: Pralekha
dataset_info:
- config_name: alignable
  features:
  - name: n_id
    dtype: string
  - name: doc_id
    dtype: string
  - name: lang
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: ben
    num_bytes: 651961117
    num_examples: 95813
  - name: eng
    num_bytes: 1048149692
    num_examples: 298111
  - name: guj
    num_bytes: 549286108
    num_examples: 67847
  - name: hin
    num_bytes: 1754308559
    num_examples: 204809
  - name: kan
    num_bytes: 567860764
    num_examples: 61998
  - name: mal
    num_bytes: 498894372
    num_examples: 67760
  - name: mar
    num_bytes: 961277740
    num_examples: 135301
  - name: ori
    num_bytes: 397642857
    num_examples: 46167
  - name: pan
    num_bytes: 872586190
    num_examples: 108459
  - name: tam
    num_bytes: 858335433
    num_examples: 149637
  - name: tel
    num_bytes: 914832899
    num_examples: 110077
  - name: urd
    num_bytes: 1199225480
    num_examples: 220425
  download_size: 3954199760
  dataset_size: 10274361211
- config_name: unalignable
  features:
  - name: n_id
    dtype: string
  - name: doc_id
    dtype: string
  - name: lang
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: eng
    num_bytes: 420307531
    num_examples: 149055
  - name: tel
    num_bytes: 372531108
    num_examples: 55038
  - name: mar
    num_bytes: 428715921
    num_examples: 67650
  - name: guj
    num_bytes: 214351582
    num_examples: 33923
  - name: hin
    num_bytes: 683869386
    num_examples: 102404
  - name: ori
    num_bytes: 111986274
    num_examples: 23083
  - name: tam
    num_bytes: 614171222
    num_examples: 74818
  - name: urd
    num_bytes: 644995094
    num_examples: 110212
  - name: kan
    num_bytes: 189633814
    num_examples: 30999
  - name: mal
    num_bytes: 192394324
    num_examples: 33880
  - name: ben
    num_bytes: 273391595
    num_examples: 47906
  - name: pan
    num_bytes: 328564948
    num_examples: 54229
  download_size: 1855179179
  dataset_size: 4474912799
configs:
- config_name: alignable
  data_files:
  - split: ben
    path: alignable/ben-*
  - split: eng
    path: alignable/eng-*
  - split: guj
    path: alignable/guj-*
  - split: hin
    path: alignable/hin-*
  - split: kan
    path: alignable/kan-*
  - split: mal
    path: alignable/mal-*
  - split: mar
    path: alignable/mar-*
  - split: ori
    path: alignable/ori-*
  - split: pan
    path: alignable/pan-*
  - split: tam
    path: alignable/tam-*
  - split: tel
    path: alignable/tel-*
  - split: urd
    path: alignable/urd-*
- config_name: unalignable
  data_files:
  - split: eng
    path: unalignable/eng-*
  - split: tel
    path: unalignable/tel-*
  - split: mar
    path: unalignable/mar-*
  - split: guj
    path: unalignable/guj-*
  - split: hin
    path: unalignable/hin-*
  - split: ori
    path: unalignable/ori-*
  - split: tam
    path: unalignable/tam-*
  - split: urd
    path: unalignable/urd-*
  - split: kan
    path: unalignable/kan-*
  - split: mal
    path: unalignable/mal-*
  - split: ben
    path: unalignable/ben-*
  - split: pan
    path: unalignable/pan-*
tags:
- data-mining
- document-alignment
- parallel-corpus
---

# Pralekha: An Indic Document Alignment Evaluation Benchmark

<div style="display: flex; gap: 10px;">
  <a href="https://arxiv.org/abs/2411.19096">
    <img src="https://img.shields.io/badge/arXiv-2411.19096-B31B1B" alt="arXiv">
  </a>
  <a href="https://huggingface.co/datasets/ai4bharat/Pralekha">
    <img src="https://img.shields.io/badge/huggingface-Pralekha-yellow" alt="HuggingFace">
  </a>
  <a href="https://github.com/AI4Bharat/Pralekha">
    <img src="https://img.shields.io/badge/github-Pralekha-blue" alt="GitHub">
  </a>
  <a href="https://creativecommons.org/licenses/by/4.0/">
    <img src="https://img.shields.io/badge/License-CC%20BY%204.0-lightgrey" alt="License: CC BY 4.0">
  </a>
</div>

**PRALEKHA** is a large-scale parallel document dataset for evaluating cross-lingual document alignment techniques across **11 Indic languages** and English. It comprises over **3 million** document pairs, with **1.5 million** being English-centric.  

---

## Dataset Description

**PRALEKHA** covers 12 languages—Bengali (`ben`), Gujarati (`guj`), Hindi (`hin`), Kannada (`kan`), Malayalam (`mal`), Marathi (`mar`), Odia (`ori`), Punjabi (`pan`), Tamil (`tam`), Telugu (`tel`), Urdu (`urd`), and English (`eng`). It includes a mixture of high- and medium-resource languages, covering 11 different scripts. The dataset spans two broad domains: **news bulletins** and **podcast scripts**, offering both written and spoken forms of data. All the data is human-written or human-verified, ensuring high quality.

The dataset has a **2:1 ratio of aligned to unaligned documents**, making it well suited for benchmarking cross-lingual document alignment techniques: the unaligned documents act as distractors that an alignment system should not pair.
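
A benchmark run is typically scored by comparing a system's predicted cross-lingual document pairs against the gold pairs implied by shared `n_id` values. The helper below is a minimal sketch of such scoring; the function name and the `(doc_id, doc_id)` pair format are illustrative assumptions, not part of the dataset itself:

```python
def evaluate_alignment(predicted_pairs, gold_pairs):
    """Score predicted (src_doc_id, tgt_doc_id) pairs against gold pairs.

    Both arguments are iterables of (doc_id, doc_id) tuples; gold pairs are
    typically built by joining two language splits of the `alignable`
    configuration on their shared `n_id`.
    """
    predicted, gold = set(predicted_pairs), set(gold_pairs)
    true_positives = len(predicted & gold)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"precision": precision, "recall": recall, "f1": f1}
```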

### Data Fields

Each data sample includes:

- **`n_id`:** Unique identifier for aligned document pairs.
- **`doc_id`:** Unique identifier for individual documents.
- **`lang`:** Language of the document (ISO-3 code).
- **`text`:** The textual content of the document.
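
For illustration, a single record looks roughly like the sketch below; the identifier values and text are made up and only show the schema:

```python
{
    "n_id": "pib-2021-000123",        # shared by all languages of one bulletin (hypothetical value)
    "doc_id": "pib-2021-000123-hin",  # unique per document (hypothetical value)
    "lang": "hin",                    # ISO-3 language code
    "text": "प्रधानमंत्री ने आज ...",      # full document text
}
```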

### Data Sources

1. **News Bulletins:** Data was custom-scraped from the [Indian Press Information Bureau (PIB)](https://pib.gov.in) website. Documents were aligned by matching bulletin IDs, which interlink bulletins across languages.
2. **Podcast Scripts:** Data was sourced from [Mann Ki Baat](https://www.pmindia.gov.in/en/mann-ki-baat), a radio program hosted by the Indian Prime Minister. This program, originally spoken in Hindi, was manually transcribed and translated into various Indian languages.

### Dataset Size Statistics

| Split         | Number of Documents | Size (bytes)       |
|---------------|---------------------|--------------------|
| **Aligned**   | 1,566,404           | 10,274,361,211     |
| **Unaligned** | 783,197             | 4,466,506,637      |
| **Total**     | 2,349,601           | 14,740,867,848     |

### Language-wise Statistics

| Language (`ISO-3`) | Aligned Documents | Unaligned Documents | Total Documents |
|---------------------|-------------------|---------------------|-----------------|
| Bengali (`ben`)     | 95,813            | 47,906              | 143,719         |
| English (`eng`)     | 298,111           | 149,055             | 447,166         |
| Gujarati (`guj`)    | 67,847            | 33,923              | 101,770         |
| Hindi (`hin`)       | 204,809           | 102,404             | 307,213         |
| Kannada (`kan`)     | 61,998            | 30,999              | 92,997          |
| Malayalam (`mal`)   | 67,760            | 33,880              | 101,640         |
| Marathi (`mar`)     | 135,301           | 67,650              | 202,951         |
| Odia (`ori`)        | 46,167            | 23,083              | 69,250          |
| Punjabi (`pan`)     | 108,459           | 54,229              | 162,688         |
| Tamil (`tam`)       | 149,637           | 74,818              | 224,455         |
| Telugu (`tel`)      | 110,077           | 55,038              | 165,115         |
| Urdu (`urd`)        | 220,425           | 110,212             | 330,637         |

---

## Usage

You can use the following commands to download and explore the dataset:

### Downloading the Entire Dataset
```python
from datasets import load_dataset

# Each configuration ("alignable" and "unalignable") is loaded separately;
# all language splits of the requested configuration are downloaded.
alignable = load_dataset("ai4bharat/Pralekha", "alignable")
unalignable = load_dataset("ai4bharat/Pralekha", "unalignable")
```
### Downloading a Specific Split
```python
from datasets import load_dataset

# Load a single language split (ISO-3 code) from one configuration.
dataset = load_dataset("ai4bharat/Pralekha", "alignable", split="<lang_code>")
# For example: dataset = load_dataset("ai4bharat/Pralekha", "alignable", split="hin")
```
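
Parallel document pairs can then be reconstructed by joining two language splits on `n_id`. The snippet below is a minimal sketch that assumes both splits fit in memory; it is not part of the dataset loader:

```python
from datasets import load_dataset

# Load two language splits of the "alignable" configuration.
eng = load_dataset("ai4bharat/Pralekha", "alignable", split="eng")
hin = load_dataset("ai4bharat/Pralekha", "alignable", split="hin")

# Index the Hindi documents by their alignment id.
hin_by_nid = {row["n_id"]: row["text"] for row in hin}

# Pair each English document with its Hindi counterpart, if one exists.
pairs = [
    (row["text"], hin_by_nid[row["n_id"]])
    for row in eng
    if row["n_id"] in hin_by_nid
]
print(f"{len(pairs)} English-Hindi document pairs")
```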
---

## License

This dataset is released under the [**CC BY 4.0**](https://creativecommons.org/licenses/by/4.0/) license.

---

## Contact

For any questions or feedback, please contact:

- Raj Dabre ([raj.dabre@cse.iitm.ac.in](mailto:raj.dabre@cse.iitm.ac.in))  
- Sanjay Suryanarayanan ([sanj.ai@outlook.com](mailto:sanj.ai@outlook.com))  
- Haiyue Song ([haiyue.song@nict.go.jp](mailto:haiyue.song@nict.go.jp))  
- Mohammed Safi Ur Rahman Khan ([safikhan2000@gmail.com](mailto:safikhan2000@gmail.com))  

Please get in touch with us for any copyright concerns.