For Machine Translation (MT) tasks, we first randomly sample 1,000 documents per English-Indic language pair, ensuring a good distribution of varying document lengths. After excluding these sampled documents, we use the remaining documents for training document-level machine translation models.
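This held-out split can be sketched as follows. The pool size, document IDs, and seed here are purely illustrative assumptions, not the dataset's actual sampling script; only the 1,000-document evaluation sample per pair comes from the text above.

```python
import random

def split_docs(doc_ids, n_eval=1000, seed=0):
    """Hold out n_eval randomly sampled documents for MT evaluation;
    the remaining documents form the training pool."""
    rng = random.Random(seed)
    held_out = set(rng.sample(doc_ids, n_eval))
    train = [d for d in doc_ids if d not in held_out]
    return sorted(held_out), train

# Hypothetical pool of 5,000 document IDs for one English-Indic pair.
doc_pool = [f"doc_{i}" for i in range(5000)]
eval_ids, train_ids = split_docs(doc_pool)
```

The fixed seed makes the held-out sample reproducible, and the disjoint train pool avoids evaluation leakage.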
---
## Data Fields
- **`tgt_lang`:** Target Language (ISO 639-3 code)
- **`tgt_text`:** Target Language Text
---
## Usage
```python
from datasets import load_dataset

dataset = load_dataset("ai4bharat/Pralekha", data_dir="<subset>/eng_<lang>")
# <subset> = train, dev, or test ; <lang> = ben, guj, hin, kan, mal, mar, ori, pan, tam, tel, urd.
```
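For instance, the `data_dir` strings for every English-Indic pair in a given subset can be generated programmatically. This is a small sketch using the subset names and language codes listed above; `data_dirs` is a hypothetical helper, not part of the dataset itself.

```python
# Language codes from the usage note above.
LANGS = ["ben", "guj", "hin", "kan", "mal", "mar",
         "ori", "pan", "tam", "tel", "urd"]

def data_dirs(subset):
    """Build the data_dir argument for each English-Indic pair in a subset."""
    return [f"{subset}/eng_{lang}" for lang in LANGS]

dirs = data_dirs("dev")  # e.g. "dev/eng_ben", ..., "dev/eng_urd"
```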
---
## Data Size Statistics
| Split | Number of Documents | Size (bytes) |