# nb-asr-north-saami-parakeet-v6-tokenremap-v4

Fine-tuned NeMo Parakeet TDT model for North Saami ASR.

## Run Info

### Validation Metrics During Training

| step | val_loss | val_wer |
|-----:|---------:|--------:|
| 500 | 1.367028 | 0.574701 |
| 1000 | 0.969782 | 0.485598 |
| 1501 | 0.816585 | 0.422668 |
| 2001 | 0.785813 | 0.392611 |
| 2501 | 0.767730 | 0.367877 |
| 3002 | 0.806660 | 0.358798 |
| 3502 | 0.858681 | 0.363181 |
| 4002 | 0.920065 | 0.353162 |
| 4503 | 0.893287 | 0.350344 |
| 5003 | 0.984134 | 0.365999 |
| 5504 | 1.016652 | 0.353788 |
| 6004 | 1.013164 | 0.343770 |
| 6504 | 1.114988 | 0.337195 |
| 7005 | 1.130610 | 0.343143 |
| 7505 | 1.099598 | 0.334690 |
| 8005 | 1.227009 | 0.343143 |
| 8506 | 1.242283 | 0.342830 |
| 9006 | 1.211001 | 0.331872 |
| 9507 | 1.288550 | 0.327176 |
| 10007 | 1.256491 | 0.332498 |
| 10507 | 1.268182 | 0.331559 |
| 11008 | 1.275226 | 0.327489 |
| 11508 | 1.288128 | 0.332185 |
| 12008 | 1.301940 | 0.327176 |
| 12509 | 1.305894 | 0.322480 |
| 13009 | 1.322677 | 0.321540 |
- Best val_wer during training: step 13009 (val_loss 1.322677, val_wer 0.321540)
- Final checkpoint: step 13009 (val_loss 1.322677, val_wer 0.321540); the best and final checkpoints coincide

## Final Evaluation Metrics

| split | loss | wer |
|-------|-----:|----:|
| validation | 1.312412 | 0.322793 |
| test | 2.393497 | 0.482566 |
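For reference, WER here is the standard word error rate: the word-level Levenshtein distance between hypothesis and reference, divided by the number of reference words. A minimal sketch (illustrative only, not the evaluation code used for this repo):

```python
def wer(reference, hypothesis):
    """Word error rate: word-level edit distance / number of reference words."""
    ref = reference.split()
    hyp = hypothesis.split()
    # One-row dynamic-programming Levenshtein over word sequences.
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev = d[0]          # distance(ref[:i-1], hyp[:j-1])
        d[0] = i
        for j, h in enumerate(hyp, 1):
            cur = d[j]
            d[j] = min(d[j] + 1,          # deletion
                       d[j - 1] + 1,      # insertion
                       prev + (r != h))   # substitution (free if words match)
            prev = cur
    return d[-1] / len(ref)

print(wer("buorre beaivi buohkaide", "buorre beaivi buohkaide"))  # 0.0
```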

## Test WER Comparison

| model | test_wer_raw | test_wer_normalized |
|-------|-------------:|--------------------:|
| Parakeet (this repo) | 0.482566 | 0.482566 |
| NbAiLab/whisper-large-sme | 0.597368 | 0.597368 |
- Delta (Whisper - Parakeet): raw 0.114803, normalized 0.114803
- Normalization for `test_wer_normalized`: segments containing `_` or `:` are dropped entirely (`drop_if_contains=['_', ':']`), and the characters `* [ ] " ( )` are stripped (`remove_chars=['*', '[', ']', '"', '(', ')']`)
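The normalization rule above can be sketched as follows (function and constant names are illustrative, not taken from the repo's comparison script):

```python
DROP_IF_CONTAINS = ["_", ":"]
REMOVE_CHARS = ["*", "[", "]", '"', "(", ")"]

def normalize(text):
    """Apply the card's test-set normalization.

    Returns None for segments that are dropped from scoring entirely,
    otherwise the segment with noise characters stripped.
    """
    if any(s in text for s in DROP_IF_CONTAINS):
        return None  # e.g. speaker tags or timestamps
    for ch in REMOVE_CHARS:
        text = text.replace(ch, "")
    return " ".join(text.split())  # collapse whitespace left by removals

print(normalize("buorre [beaivi]*"))  # buorre beaivi
print(normalize("SPEAKER_01 buorre"))  # None
```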

## Notes

- Validation/test metrics above are computed post-training from the exported `.nemo` model.
- The training validation table is parsed from `Validation metrics logged to wandb` log entries.
- The Whisper comparison uses `scripts/compare_test_wer_models.py` on the same test manifest.