---
license: cc0-1.0
language:
- af
- ar
- ckb
- cs
- da
- de
- el
- en
- es
- fi
- fr
- gn
- he
- hi
- hu
- it
- ja
- ka
- kab
- ko
- lv
- nl
- pt
- quy
- ro
- sk
- sl
- sq
- sr
- th
- tr
- uk
- vi
- yue
task_categories:
- automatic-speech-recognition
pretty_name: Common Voice Corpus 15.0
size_categories:
- 100B<n<1T
tags:
- mozilla
- foundation
---
# Dataset Card for Common Voice Corpus 15.0

This dataset is an unofficial converted version of the Mozilla Common Voice Corpus 15.0. It currently contains the following languages: Arabic, French, Georgian, German, Hebrew, Italian, Portuguese, and Spanish, among others. Additional languages are being converted and will be uploaded in the next few days.

## How to use

The `datasets` library allows you to load and pre-process the dataset in pure Python, at scale. The dataset can be downloaded and prepared on your local drive with a single call to the `load_dataset` function.

For example, to download the Portuguese config, simply specify the corresponding language config name (i.e., `"pt"` for Portuguese):
```
from datasets import load_dataset

cv_15 = load_dataset("fsicoli/common_voice_15_0", "pt", split="train")
```
Using the `datasets` library, you can also stream the dataset on the fly by adding a `streaming=True` argument to the `load_dataset` call. Loading a dataset in streaming mode loads individual samples one at a time, rather than downloading the entire dataset to disk.

```
from datasets import load_dataset

cv_15 = load_dataset("fsicoli/common_voice_15_0", "pt", split="train", streaming=True)

print(next(iter(cv_15)))
```

Bonus: create a PyTorch dataloader directly with your own datasets (local/streamed).

### Local
```
from datasets import load_dataset
from torch.utils.data import DataLoader
from torch.utils.data.sampler import BatchSampler, RandomSampler

cv_15 = load_dataset("fsicoli/common_voice_15_0", "pt", split="train")
batch_sampler = BatchSampler(RandomSampler(cv_15), batch_size=32, drop_last=False)
dataloader = DataLoader(cv_15, batch_sampler=batch_sampler)
```

### Streaming
```
from datasets import load_dataset
from torch.utils.data import DataLoader

cv_15 = load_dataset("fsicoli/common_voice_15_0", "pt", split="train", streaming=True)
dataloader = DataLoader(cv_15, batch_size=32)
```

To find out more about loading and preparing audio datasets, head over to hf.co/blog/audio-datasets.

### Dataset Structure

#### Data Instances

A typical data point comprises the path to the audio file and its transcribed sentence. Additional fields include `accent`, `age`, `client_id`, `up_votes`, `down_votes`, `gender`, `locale` and `segment`.
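As a minimal sketch of what such a data point looks like, assuming the fields listed above (the values shown are hypothetical placeholders, not real corpus data):

```python
# Sketch of one Common Voice data point, using the fields described above.
# All values below are hypothetical placeholders.
sample = {
    "client_id": "a1b2c3",                # anonymised speaker id (placeholder)
    "path": "common_voice_pt_0.mp3",      # placeholder audio filename
    "sentence": "Bom dia.",               # hypothetical transcript
    "up_votes": 2,                        # validation votes from contributors
    "down_votes": 0,
    "age": "twenties",
    "gender": "male",
    "accent": "",
    "locale": "pt",
    "segment": "",
}

# ASR pipelines typically consume just the audio path and its sentence:
print(sample["path"], "->", sample["sentence"])
```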

### Licensing Information

Public Domain, CC-0

### Citation Information
```
@inproceedings{commonvoice:2020,
  author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
  title = {Common Voice: A Massively-Multilingual Speech Corpus},
  booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
  pages = {4211--4215},
  year = 2020
}
```