---
license: mit
dataset_info:
  features:
    - name: id
      dtype: string
    - name: audio
      dtype:
        audio:
          sampling_rate: 48000
    - name: utterance
      dtype: string
    - name: landmarks
      dtype: string
  splits:
    - name: train
      num_bytes: 3491366369.548
      num_examples: 115487
  download_size: 2130707185
  dataset_size: 3491366369.548
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
language: en
tags:
  - audio
  - speech-synthesis
  - acoustic-landmarks
  - phonetics
  - text-to-speech
pretty_name: "Pink Trombone English Phonetic & Landmark Dataset"
---

# Dataset Card for **Pink Trombone English Phonetic & Landmark Dataset**

**Repository:** `mcamara/all-words-in-english-with-pink-trombone`
**Modality:** audio + time-aligned events (landmarks) + articulatory keyframes
**Language:** English (IPA)
**Sampling rate:** 48,000 Hz (mono)
**Size:** 115,487 items (single split: `train`)
**Indexing:** alphabetical by `id` (orthographic word)

---

## 1) Summary

A large-scale, *clean* synthetic speech dataset generated with the **Pink Trombone** articulatory synthesizer. Each example links a word → (i) audio, (ii) a **phoneme/keyframe script** used for synthesis, and (iii) **acoustic landmarks** extracted from the model’s internal state. Landmark types follow a Stevens-style inventory (e.g., stop closure/release, fricative onset/release, vowel, glide).

**Primary use:** training and evaluating **acoustic landmark detection**.
**Secondary uses:** phoneme recognition, articulatory–acoustic modeling, TTS/control experiments, data augmentation.

---

## 2) What’s in each example?

- `id` *(string)*: the word (e.g., `"bashore"`).
- `audio` *(datasets.Audio)*: mono WAV @ 48 kHz.
- `utterance` *(string, JSON-formatted)*: Pink Trombone **keyframes** (phoneme tags, timing, control parameters).
- `landmarks` *(string, JSON-formatted)*: **time-stamped events** (`type`, `time`, `name`).

> Note: `utterance` and `landmarks` are stored as **JSON strings** for portability. Parse them on load.
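For example, a minimal decoding sketch (`example` is a stand-in for one dataset row; the full loading recipe is in Section 6):

```python
import json

# `utterance` and `landmarks` arrive as JSON strings; decode them per row.
keyframes = json.loads(example["utterance"])["keyframes"]
events = json.loads(example["landmarks"])

print([k["name"] for k in keyframes])            # phoneme / sub-phoneme tags
print([(e["type"], e["time"]) for e in events])  # landmark types and times (s)
```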
---

## 3) Example instance (abridged)

```json
{
  "id": "bashore",
  "audio": { "path": "bashore.wav", "array": "…", "sampling_rate": 48000 },
  "utterance": "{\"name\":\"bashore\",\"keyframes\":[
      {\"isSubPhoneme\":false,\"intensity\":1,\"name\":\"b(0)\",
       \"frontConstriction.diameter\":0.088,
       \"backConstriction.diameter\":5,
       \"tenseness\":0.691, \"loudness\":0.912,
       \"frontConstriction.index\":41.108,
       \"time\":0.10, \"frequency\":140, \"tractLength\":44},
      {\"isSubPhoneme\":false,\"name\":\"b(0)]\",\"isHold\":true,\"time\":0.15},
      {\"isSubPhoneme\":true, \"name\":\"b(1)\", \"time\":0.16},
      {\"isSubPhoneme\":true, \"name\":\"b(1)]\",\"time\":0.17},
      {\"isSubPhoneme\":false,\"name\":\"æ\",\"tongue.index\":14.007,
       \"tongue.diameter\":2.887, \"time\":0.27},
      {\"isSubPhoneme\":false,\"name\":\"ʃ\",\"frontConstriction.index\":31.583,
       \"tongue.index\":38.116,\"tongue.diameter\":4.172, \"time\":0.37},
      {\"isSubPhoneme\":false,\"name\":\"ɚ\",\"frontConstriction.index\":28.317,
       \"tongue.index\":8.941, \"tongue.diameter\":1.365, \"time\":0.522}
  ]}",
  "landmarks": "[{\"type\":\"Sc\",\"time\":0.10,\"name\":\"b(0)\"},
      {\"type\":\"Sr\",\"time\":0.155,\"name\":\"transition\"},
      {\"type\":\"V\", \"time\":0.27, \"name\":\"æ\"},
      {\"type\":\"Fc\",\"time\":0.37, \"name\":\"ʃ\"},
      {\"type\":\"Fr\",\"time\":0.422,\"name\":\"ʃ]\"},
      {\"type\":\"V\", \"time\":0.522,\"name\":\"ɚ\"}]"
}
```

A smaller “toy” word:

```json
{
  "id": "basic",
  "utterance": "{\"name\":\"basic\",\"keyframes\":[
      {\"name\":\"b\",  \"time\":0.10, \"isSubPhoneme\":false},
      {\"name\":\"eɪ\", \"time\":0.22, \"isSubPhoneme\":false},
      {\"name\":\"s\",  \"time\":0.34, \"isSubPhoneme\":false},
      {\"name\":\"ɪ\",  \"time\":0.46, \"isSubPhoneme\":false},
      {\"name\":\"k\",  \"time\":0.58, \"isSubPhoneme\":false}
  ]}",
  "landmarks": "[{\"type\":\"Sc\",\"time\":0.10,\"name\":\"b\"},
      {\"type\":\"Sr\",\"time\":0.16,\"name\":\"b\"},
      {\"type\":\"V\", \"time\":0.22,\"name\":\"eɪ\"}]"
}
```

---

## 4) Landmark taxonomy

Landmarks are **instantaneous events** (times in seconds from audio start):

| Code | Meaning (intuition) | Typical trigger |
| ---: | ------------------- | --------------- |
| `Sc` | **Stop closure** | Oral constriction reaches closure |
| `Sr` | **Stop release** (burst/VOT onset) | Closure releases / pressure burst |
| `Fc` | **Fricative onset** | Narrow constriction → turbulence onset |
| `Fr` | **Fricative release** | Turbulence ceases |
| `V` | **Vowel event** (steady vocalic segment) | Stable vocalic target (F1–F2 region) |
| `Nc` | **Nasal closure** | Closure of nasal cavity |
| `Nr` | **Nasal release** | Release of nasal cavity |
| `G` | **Glide** | Narrow vocalic constriction |

> The inventory is inspired by Stevens’ acoustic landmark theory. Exact emission is derived from Pink Trombone’s internal state machine and target transitions.

---

## 5) Utterance/keyframe controls (Pink Trombone)

Common fields observed in `utterance.keyframes`:

* **Timing and tags:** `name` (phoneme or sub-phoneme like `b(0)`, `b(1)`), `time` (s), `isSubPhoneme`, `isHold`, `isSilent`, `intensity`, `intensityMultiplier`.
* **Articulators / tract:**
  * `tongue.index`, `tongue.diameter`
  * `frontConstriction.index`, `frontConstriction.diameter`
  * `backConstriction.diameter`
  * `tractLength`
* **Source / prosody:** `tenseness` (voicing), `frequency` (F0, Hz), `loudness`.

> These are *targets* at specific times; the synthesizer interpolates between them to generate continuous motion and audio.
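To make the interpolation note concrete, here is a minimal sketch that resamples one keyframe parameter onto a uniform time grid. It assumes simple linear interpolation between targets, which may differ from Pink Trombone’s actual easing:

```python
import json

import numpy as np

def control_track(utterance_json: str, param: str, hop_s: float = 0.01):
    """Interpolate one keyframe parameter onto a uniform time grid.

    Linear interpolation is an assumption here; the synthesizer's own
    easing between targets may be smoother.
    """
    keyframes = json.loads(utterance_json)["keyframes"]
    # Keep only keyframes that actually specify this parameter.
    points = sorted((k["time"], k[param]) for k in keyframes if param in k)
    times, values = zip(*points)
    grid = np.arange(times[0], times[-1], hop_s)
    return grid, np.interp(grid, times, values)

# e.g., an F0 trajectory for one example:
# t, f0 = control_track(ex["utterance"], "frequency")
```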
---

## 6) Loading and parsing

```python
import json

import numpy as np
from datasets import load_dataset

ds = load_dataset("mcamara/all-words-in-english-with-pink-trombone", split="train")
ex = ds[0]

audio = ex["audio"]                # datasets.Audio -> dict with "array" + "sampling_rate"
utt = json.loads(ex["utterance"])  # dict: "name" + "keyframes"
lms = json.loads(ex["landmarks"])  # list of {type, time, name}

# Example: convert landmarks to frame-level labels (10 ms hop)
sr = audio["sampling_rate"]
hop = int(0.01 * sr)                                # 10 ms in samples
n_frames = int(np.ceil(len(audio["array"]) / hop))
frame_times = np.arange(n_frames) * (hop / sr)

# For each frame, list all landmark types within ±15 ms
tol = 0.015
per_frame_labels = [
    [e["type"] for e in lms if abs(e["time"] - t) <= tol]
    for t in frame_times
]
```

---

## 7) Recommended evaluation protocols (landmark detection)

* **Event-matching tolerance:** ±10–20 ms around the reference time. Report **precision / recall / F1** per class and macro-averaged.
* **Class subsets:** report both the **full set** and an **obstruent-only** subset (`Sc/Sr/Fc/Fr`).
* **Optional metrics:** Average Precision (AP) per class; DET curves for `Sc/Sr`.

**Baseline idea (sketch):**

* Input: 80-bin log-mel spectrogram (25 ms window, 10 ms hop).
* Model: small CRNN or 1D-TCN predicting per-frame event probabilities (multi-label).
* Decoding: local peak picking + NMS with a minimum separation (e.g., 30 ms) to emit events.

---

## 8) Suggested use-cases

* **Acoustic landmark detection (primary).**
* **ASR pretraining / weak supervision:** align synthetic landmarks with human corpora.
* **Articulatory–acoustic modeling:** learn mappings between keyframes and acoustics.
* **TTS/control:** use keyframes as interpretable conditioning signals.
* **Curricula / pedagogy:** visualize gesture → acoustics with perfect alignments.

---

## 9) Dataset structure and splits

* **Split:** single `train` covering **115,487** English words (alphabetically keyed by `id`).
* **One token per word** (canonical, isolated pronunciation).
* To create eval sets, we recommend deterministic sampling by word hash (e.g., 90/5/5 train/dev/test) for reproducibility and lexical balance.

```python
import hashlib

def bucket(word: str, K: int = 1000) -> int:
    return int(hashlib.md5(word.encode()).hexdigest(), 16) % K

# Example: split by bucket (one pass over the ids)
buckets = [bucket(w) for w in ds["id"]]
train_idx = [i for i, b in enumerate(buckets) if b < 900]         # 90%
dev_idx   = [i for i, b in enumerate(buckets) if 900 <= b < 950]  # 5%
test_idx  = [i for i, b in enumerate(buckets) if b >= 950]        # 5%
```

---

## 10) Data fields (schema)

| Field | Type | Description |
| ----- | ---- | ----------- |
| `id` | string | Orthographic word, unique key. |
| `audio` | datasets.Audio | Mono 48 kHz waveform. |
| `utterance` | string (JSON) | Synthesis plan (word name + keyframe list). |
| `landmarks` | string (JSON) | Time-ordered acoustic events. |

**`utterance` JSON** (angle brackets mark value placeholders; keyframe fields appear only where a target is set):

```json
{
  "name": "<word>",
  "keyframes": [
    {
      "name": "<phoneme or sub-phoneme tag>",
      "time": <seconds>,
      "isSubPhoneme": <bool>,
      "isHold": <bool>,
      "isSilent": <bool>,
      "intensity": <number>,
      "intensityMultiplier": <number>,
      "tongue.index": <number>,
      "tongue.diameter": <number>,
      "frontConstriction.index": <number>,
      "frontConstriction.diameter": <number>,
      "backConstriction.diameter": <number>,
      "tenseness": <number>,
      "loudness": <number>,
      "tractLength": <number>,
      "frequency": <Hz>
    }
  ]
}
```

**`landmarks` JSON** (types as in the Section 4 taxonomy):

```json
[
  { "type": "Sc|Sr|Fc|Fr|V|Nc|Nr|G", "time": <seconds>, "name": "<phoneme or transition tag>" }
]
```
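As a lightweight sanity check against this schema, the sketch below parses one row and asserts a few invariants. The specific checks (e.g., that `utterance.name` matches `id`, or that landmark times are non-decreasing) are illustrative assumptions, not documented guarantees:

```python
import json

LANDMARK_TYPES = {"Sc", "Sr", "Fc", "Fr", "V", "Nc", "Nr", "G"}

def check_example(ex: dict) -> None:
    """Illustrative schema checks; assumptions, not dataset guarantees."""
    utt = json.loads(ex["utterance"])
    lms = json.loads(ex["landmarks"])
    assert utt["name"] == ex["id"], "utterance name differs from id"
    assert utt["keyframes"], "empty keyframe list"
    times = [e["time"] for e in lms]
    assert times == sorted(times), "landmarks not time-ordered"
    unknown = {e["type"] for e in lms} - LANDMARK_TYPES
    assert not unknown, f"unexpected landmark types: {unknown}"

# check_example(ds[0])
```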
---

## 11) Generation process (high-level)

1. **Canonical IPA** per word (dictionary-style pronunciation).
2. **Manual articulatory mapping** → Pink Trombone control targets for each phoneme, with sub-phoneme tags for closures/releases where relevant.
3. **Synthesis** at 48 kHz (mono).
4. **Event extraction** from the model state to emit **landmarks** aligned to the audio and keyframe times.
5. **Packaging** into an HF dataset: audio + JSON strings (`utterance`, `landmarks`) per item.

---

## 12) Quality checks

* **Consistency:** each example has audio, a non-empty keyframe list, and at least one landmark.
* **Alphabetical keys:** `id` is sorted to simplify indexing and sharding.
* **Spot-checks:** vowel targets near expected **F1–F2** regions; sibilant spectral centroid; stop bursts visible in spectrograms near `Sr`.

---

## 13) Considerations & limitations

* **Synthetic, single voice** (Pink Trombone): no inter-speaker variability.
* **Canonical forms** (isolated pronunciations): limited coarticulation diversity.
* **Model-specific parameters:** Pink Trombone naming/ranges are not directly comparable to other articulatory models without a mapping.

Mitigations: augment with noise/reverberation; mix with human corpora; moderately randomize tract parameters for robustness studies.

---

## 14) How to visualize

```python
# Quick plot of waveform + landmark markers (matplotlib)
import json

import matplotlib.pyplot as plt
import numpy as np

y = audio["array"]
sr = audio["sampling_rate"]
t = np.arange(len(y)) / sr
lms = json.loads(ex["landmarks"])

plt.figure()
plt.plot(t, y)
for e in lms:
    plt.axvline(e["time"], linestyle="--", alpha=0.4)  # one dashed line per landmark
plt.title(f'{ex["id"]} – landmarks: {sorted({e["type"] for e in lms})}')
plt.xlabel("Time (s)")
plt.ylabel("Amplitude")
plt.show()
```

---

## 15) Ethics / intended use

* Intended for **research and education** in speech, phonetics, and ML.
* Not human speech; **do not** use as-is for biometric or speaker-ID tasks.
* If mixing with human data, follow the human corpus license and ethics guidelines.

---

## 16) Citation

Please cite both the landmark theory and this dataset:

```
@book{stevens1998acoustic,
  title     = {Acoustic Phonetics},
  author    = {Stevens, Kenneth N.},
  year      = {1998},
  publisher = {MIT Press}
}

@misc{pink_trombone_english_landmarks,
  author       = {Cámara, Mateo},
  title        = {Pink Trombone English Phonetic \& Landmark Dataset},
  year         = {2025},
  howpublished = {\url{https://huggingface.co/datasets/mcamara/all-words-in-english-with-pink-trombone}}
}
```