Datasets:
Project Euphonia is a public initiative led by Google that aims to improve Automatic Speech Recognition (ASR) for individuals with atypical speech. To date, most of Project Euphonia’s work has focused on English, resulting in outcomes such as the Android application Project Relate, which generates personalized speech recognition models in English. In recent years, the project has expanded its data collection efforts to additional languages, including French, Spanish, Japanese, and Hindi.
The Vaani Atypical Speech Corpus is a collaboration between Project Vaani and Project Euphonia to collect atypical speech data in Indic languages. This partnership leverages Vaani’s extensive expertise and nationwide reach in India alongside Project Euphonia’s experience in gathering speech data from historically underserved communities. The dataset is intended to support research toward improving ASR systems for such speakers, with potential applications in voice dictation, real-time communication and accurate closed captioning.
In the data collection process, once targeted speakers are identified, we follow the same approach as in Vaani: participants are shown images and asked to describe or speak about them. Each participant provides recordings for 20 images, with each recording lasting approximately 20–40 seconds. Data is collected with the assistance of Karya. Because atypical speech can be difficult to understand, audio segments are transcribed by individuals familiar with the participant's speech patterns (e.g., family members or educators) to capture the speaker's intended meaning as fully as possible, rather than a strict verbatim wording. While the nature of each speaker's disability is not recorded, the speech disabilities represented in this dataset include learning disabilities (mild–high), autism, cerebral palsy, Down syndrome, speech and hearing impairments, and intellectual and developmental disabilities.
Collected data undergoes rigorous automated and manual validation by language experts to ensure high quality. Although not specialists in atypical speech, reviewers assess every audio segment for technical integrity and verify that the speaking style is natural and voluntary. They also check that the content is safe, private, and relevant to the accompanying visual context.
The dataset is currently intended for research use only.
Dataset Summary
| Language | Segments | Speakers | Total Duration | Mean Duration |
|---|---|---|---|---|
| Hindi | 688 | 25 | 296.6 min | 25.5 s |
| Marathi | 290 | 8 | 194.5 min | 39.7 s |
| Telugu | 189 | 7 | 109.1 min | 34.6 s |
| Total | 1,167 | 40 | 600.2 min | — |
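The per-language figures above can be recomputed from each segment's `language` and `duration_s` metadata. A minimal sketch, shown here over hypothetical rows shaped like the dataset's metadata (swap in the loaded split for real numbers):

```python
from collections import defaultdict

# Hypothetical rows mirroring the dataset's metadata fields.
rows = [
    {"language": "Hindi", "duration_s": 24.0},
    {"language": "Hindi", "duration_s": 27.0},
    {"language": "Marathi", "duration_s": 40.0},
]

def summarize(rows):
    """Per-language segment count, total duration (min), and mean duration (s)."""
    by_lang = defaultdict(list)
    for r in rows:
        by_lang[r["language"]].append(r["duration_s"])
    return {
        lang: {
            "segments": len(durs),
            "total_min": round(sum(durs) / 60, 1),
            "mean_s": round(sum(durs) / len(durs), 1),
        }
        for lang, durs in by_lang.items()
    }

summary = summarize(rows)
```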
Subsets / Configurations
| Config | Description |
|---|---|
| `Hindi` | Hindi segments only |
| `Marathi` | Marathi segments only |
| `Telugu` | Telugu segments only |
| `all` | All languages combined (default) |
Prompt media (images and videos used to elicit speech) are stored in `data/image/` (1,093 images) and `data/video/` (84 videos). They are not audio configs; filter by the `prompt_file_type` metadata field instead.
Usage

```python
from datasets import load_dataset

REPO = "ARTPARK-IISc/Vaani-Atypical-Speech-Corpus"
access_token = "Your access token"

# Load a specific language
ds_hindi = load_dataset(REPO, "Hindi", token=access_token)
ds_marathi = load_dataset(REPO, "Marathi", token=access_token)
ds_telugu = load_dataset(REPO, "Telugu", token=access_token)

# Load all languages (default)
ds = load_dataset(REPO, token=access_token)

# Filter by prompt type
image_segs = ds["train"].filter(lambda x: x["prompt_file_type"] == "image")
video_segs = ds["train"].filter(lambda x: x["prompt_file_type"] == "video")
```
Associate prompt image / video with each audio segment
Each row has a `prompt_file_name` field (e.g. `Vaani-Euphonia-Pilot-Image-42.jpg`) that points to the corresponding prompt file stored in `data/image/` or `data/video/`.
```python
from huggingface_hub import hf_hub_download
from PIL import Image

REPO = "ARTPARK-IISc/Vaani-Atypical-Speech-Corpus"

def get_prompt_file(row):
    """Download and return the local path of the prompt image or video for a given row."""
    folder = "data/image" if row["prompt_file_type"] == "image" else "data/video"
    local_path = hf_hub_download(
        repo_id=REPO,
        repo_type="dataset",
        filename=f"{folder}/{row['prompt_file_name']}",
    )
    return local_path

# Example: iterate and pair audio with its prompt media
for row in ds["train"].select(range(5)):
    audio = row["audio"]           # {"array": ..., "sampling_rate": 16000, "path": ...}
    prompt = get_prompt_file(row)  # local path to the image/video file
    if row["prompt_file_type"] == "image":
        img = Image.open(prompt)
        img.show()
    print(f"Speaker : {row['speaker_id']}")
    print(f"Language: {row['language']}")
    print(f"Prompt  : {row['prompt_file_name']} ({row['prompt_file_type']})")
    print(f"Text    : {row['transcription']}")
    print()
```
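Because each row carries both an intended `transcription` and a `verbatim_transcription`, one simple way to quantify how far they diverge (or to score an ASR hypothesis against either reference) is a word-level edit distance. A minimal, dependency-free sketch; for full WER reporting a library such as `jiwer` would typically be used:

```python
def word_edit_distance(ref: str, hyp: str) -> int:
    """Levenshtein distance over whitespace-separated words (single-row DP)."""
    r, h = ref.split(), hyp.split()
    d = list(range(len(h) + 1))            # distances from the empty ref prefix
    for i, rw in enumerate(r, 1):
        prev, d[0] = d[0], i               # prev holds d[i-1][j-1]
        for j, hw in enumerate(h, 1):
            cur = d[j]                     # d[i-1][j] before overwrite
            d[j] = min(d[j] + 1,           # deletion
                       d[j - 1] + 1,       # insertion
                       prev + (rw != hw))  # substitution / match
            prev = cur
    return d[len(h)]

# e.g. compare the two transcription fields for a row:
# word_edit_distance(row["transcription"], row["verbatim_transcription"])
```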
Metadata Fields
| Field | Description |
|---|---|
| `audio` | Audio object (WAV, 16 kHz mono) |
| `speaker_id` | Unique contributor identifier |
| `language` | Hindi / Marathi / Telugu |
| `prompt_file_type` | Prompt media type: `image` or `video` |
| `prompt_file_name` | Prompt file used to elicit the speech |
| `transcription` | Intended transcription |
| `difficulty_level` | Low / Medium / High (from QC understandability rating) |
| `verbatim_transcription` | Word-for-word record by a non-specialist transcriber |
| `duration_s` | Audio duration in seconds |
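The `difficulty_level` field lends itself to reporting results stratified by understandability. A minimal sketch over hypothetical rows (with a loaded split, replace `rows` with `ds["train"]`):

```python
from collections import Counter

# Hypothetical rows carrying the speaker_id and difficulty_level fields.
rows = [
    {"speaker_id": "spk01", "difficulty_level": "Low"},
    {"speaker_id": "spk01", "difficulty_level": "Medium"},
    {"speaker_id": "spk02", "difficulty_level": "High"},
    {"speaker_id": "spk03", "difficulty_level": "High"},
]

# Segment counts per difficulty level
by_difficulty = Counter(r["difficulty_level"] for r in rows)

# Which speakers contribute to each level
speakers_per_level = {
    level: {r["speaker_id"] for r in rows if r["difficulty_level"] == level}
    for level in by_difficulty
}
```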