Update README.md
  - split: urd
    path: unalignable/urd-*
tags:
- parallel-corpus
- document-alignment
- machine-translation
---

# Pralekha: Cross-Lingual Document Alignment for Indic Languages

<div style="display: flex; gap: 10px;">
  <a href="https://arxiv.org/abs/2411.19096">
  </a>
</div>

**Pralekha** is a large-scale parallel document dataset for Cross-Lingual Document Alignment (CLDA) and Machine Translation (MT) across **11 Indic languages** and English. It comprises over **3 million** document pairs, with **1.5 million** being English-centric.

---

## Dataset Description

**Pralekha** covers 12 languages: Bengali (`ben`), Gujarati (`guj`), Hindi (`hin`), Kannada (`kan`), Malayalam (`mal`), Marathi (`mar`), Odia (`ori`), Punjabi (`pan`), Tamil (`tam`), Telugu (`tel`), Urdu (`urd`), and English (`eng`). It includes a mixture of high- and medium-resource languages covering 11 different scripts. The dataset spans two broad domains: **News Bulletins** ([Indian Press Information Bureau (PIB)](https://pib.gov.in)) and **Podcast Scripts** ([Mann Ki Baat](https://www.pmindia.gov.in/en/mann-ki-baat)), offering both written and spoken forms of data. All of the data is human-written or human-verified, ensuring high quality.

While the above accounts for `alignable` (parallel) documents, real-world multilingual corpora often also contain `unalignable` documents. To simulate this for CLDA evaluation, we sample `unalignable` documents from [Sangraha Unverified](https://huggingface.co/datasets/ai4bharat/sangraha/viewer/unverified), selecting 50% of Pralekha's size to maintain a 1:2 ratio of `unalignable` to `alignable` documents.

For Machine Translation (MT) tasks, we first randomly sample 1,000 documents per English-Indic language pair, ensuring a good distribution of varying document lengths. After excluding these sampled documents, we use the remaining documents to train document-level machine translation models.
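The exact sampling procedure is described in the paper; as a rough, hypothetical sketch (the bucket count, seed, helper name, and dummy corpus below are all invented for illustration), length-stratified sampling of an evaluation set could look like:

```python
import random

def sample_by_length(docs, k=1000, n_buckets=4, seed=0):
    # Sort documents by length, split them into contiguous length buckets,
    # and draw an equal share from each bucket so that short, medium, and
    # long documents are all represented in the k sampled documents.
    rng = random.Random(seed)
    docs = sorted(docs, key=len)
    step = max(1, len(docs) // n_buckets)
    buckets = [docs[i:i + step] for i in range(0, len(docs), step)]
    picked = []
    for bucket in buckets[:n_buckets]:
        picked.extend(rng.sample(bucket, min(k // n_buckets, len(bucket))))
    return picked

# Dummy corpus of 400 documents with varying lengths.
corpus = [("word " * n).strip() for n in range(1, 401)]
held_out = sample_by_length(corpus, k=40)          # 40 evaluation documents
train = [d for d in corpus if d not in held_out]   # 360 remain for training
```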

## Data Fields

### Alignable & Unalignable Set:

- **`n_id`:** Unique identifier for `alignable` document pairs (random `n_id`s are assigned to the `unalignable` set).
- **`doc_id`:** Unique identifier for individual documents.
- **`lang`:** Language of the document (ISO 639-3 code).
- **`text`:** The textual content of the document.

### Train, Dev & Test Set:

- **`src_lang`:** Source language (`eng`).
- **`src_text`:** Source-language text.
- **`tgt_lang`:** Target language (ISO 639-3 code).
- **`tgt_text`:** Target-language text.
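As a quick illustration of the two schemas, here are two invented records (every field value below is made up, not drawn from the dataset):

```python
# Hypothetical records matching the fields described above.
alignable_doc = {
    "n_id": "pair-000123",        # shared by the documents of one pair
    "doc_id": "pair-000123-hin",  # unique per document
    "lang": "hin",                # ISO 639-3 code
    "text": "...",                # document content
}
mt_pair = {
    "src_lang": "eng",
    "src_text": "...",
    "tgt_lang": "hin",
    "tgt_text": "...",
}

def is_pair(a, b):
    # Two alignable documents belong to the same pair iff their n_id
    # matches while the documents themselves differ.
    return a["n_id"] == b["n_id"] and a["doc_id"] != b["doc_id"]
```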

## Usage

You can load specific **subsets** and **splits** from this dataset using the `datasets` library.

### Load an entire subset

```python
from datasets import load_dataset

dataset = load_dataset("ai4bharat/Pralekha", data_dir="<subset>")
# <subset> = alignable, unalignable, train, dev & test.
```

### Load a specific split within a subset

```python
from datasets import load_dataset

dataset = load_dataset("ai4bharat/Pralekha", data_dir="<subset>/<lang>")
# <subset> = alignable, unalignable ;
# <lang> = ben, eng, guj, hin, kan, mal, mar, ori, pan, tam, tel, urd.
```

```python
from datasets import load_dataset

dataset = load_dataset("ai4bharat/Pralekha", data_dir="<subset>/eng_<lang>")
# <subset> = train, dev & test ;
# <lang> = ben, guj, hin, kan, mal, mar, ori, pan, tam, tel, urd.
```
|
## Data Size Statistics

| Split           | Number of Documents | Size (bytes)    |
|-----------------|---------------------|-----------------|
| **Alignable**   | 1,566,404           | 10,274,361,211  |
| **Unalignable** | 783,197             | 4,466,506,637   |
| **Total**       | 2,349,601           | 14,740,867,848  |
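The split sizes are internally consistent; a quick arithmetic check of the table:

```python
# (documents, bytes) per split, copied from the table above.
alignable = (1_566_404, 10_274_361_211)
unalignable = (783_197, 4_466_506_637)
total = (2_349_601, 14_740_867_848)

# Rows sum exactly to the stated totals.
assert alignable[0] + unalignable[0] == total[0]
assert alignable[1] + unalignable[1] == total[1]

# The 1:2 unalignable-to-alignable ratio described earlier also holds
# up to rounding: 783,197 differs from 1,566,404 / 2 by only a few docs.
assert abs(alignable[0] - 2 * unalignable[0]) <= 10
```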
|
## Language-wise Statistics

| Language (`ISO-3`)  | Alignable Documents | Unalignable Documents | Total Documents |
|---------------------|---------------------|-----------------------|-----------------|
| Bengali (`ben`)     | 95,813              | 47,906                | 143,719         |
| English (`eng`)     | 298,111             | 149,055               | 447,166         |

---

# Citation

If you use Pralekha in your work, please cite us:
```bibtex
@article{suryanarayanan2024pralekha,
  title={Pralekha: An Indic Document Alignment Evaluation Benchmark},
  author={Suryanarayanan, Sanjay and Song, Haiyue and Khan, Mohammed Safi Ur Rahman and Kunchukuttan, Anoop and Khapra, Mitesh M and Dabre, Raj},
  journal={arXiv preprint arXiv:2411.19096},
  year={2024}
}
```

---

For any questions or feedback, please contact:

- Haiyue Song ([haiyue.song@nict.go.jp](mailto:haiyue.song@nict.go.jp))
- Mohammed Safi Ur Rahman Khan ([safikhan2000@gmail.com](mailto:safikhan2000@gmail.com))

Please get in touch with us for any copyright concerns.