Whisper-large-v3-turbo-tr-finetuned

This model is a fine-tuned version of openai/whisper-large-v3-turbo on the commonvoice_17_tr_fixed dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1598
  • WER (word error rate): 13.1893
  • CER (character error rate): 2.8903
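
Both error rates above are on a 0-100 scale. Below is a minimal sketch of how such metrics are typically computed with the Hugging Face evaluate library; the card does not show the author's exact evaluation code, and the example transcripts are hypothetical:

```python
# Minimal sketch (not the author's exact evaluation code) of computing
# WER and CER with the Hugging Face `evaluate` library.
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

# Hypothetical model outputs and ground-truth transcripts.
predictions = ["merhaba dünya", "bugün hava çok güzel"]
references = ["merhaba dünya", "bugün hava çok güzeldi"]

# Both metrics return a fraction; multiply by 100 to match the scale above.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
cer = 100 * cer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}  CER: {cer:.4f}")
```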

Model description

A Turkish automatic speech recognition (ASR) model: openai/whisper-large-v3-turbo fine-tuned on the commonvoice_17_tr_fixed dataset to transcribe spoken Turkish to text.

Intended uses & limitations

The model is intended for transcribing Turkish speech, as shown in the usage sketch below. As with any Whisper fine-tune, accuracy on audio that differs from the training distribution (noisy recordings, other dialects or domains) should be verified before deployment.
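
A minimal usage sketch, assuming the checkpoint is published on the Hugging Face Hub under egemenakdeniz/whisper-large-v3-turbo-tr-finetuned; the audio file path and device handling are illustrative:

```python
# Minimal usage sketch; assumes the checkpoint is available on the
# Hugging Face Hub under the repo id shown in this card.
import torch
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="egemenakdeniz/whisper-large-v3-turbo-tr-finetuned",
    torch_dtype=torch.float32,  # the published weights are F32
    device="cuda" if torch.cuda.is_available() else "cpu",
)

# "sample.wav" is a placeholder path to a Turkish speech recording.
result = asr(
    "sample.wav",
    generate_kwargs={"language": "turkish", "task": "transcribe"},
)
print(result["text"])
```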

Training and evaluation data

Training and evaluation both used the commonvoice_17_tr_fixed dataset; judging by the name, this appears to be a cleaned Turkish split of Common Voice 17.0, though the card gives no further details.

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 64
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 248
  • training_steps: 1650
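
A hedged reconstruction of the settings above as transformers Seq2SeqTrainingArguments; output_dir is a hypothetical placeholder, and the author's actual training script may differ:

```python
# Hedged reconstruction of the training configuration from the list above.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-large-v3-turbo-tr-finetuned",  # hypothetical path
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,  # 16 x 4 = effective train batch size 64
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=248,
    max_steps=1650,
)
```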

Training results

| Training Loss | Epoch  | Step | Validation Loss | WER     | CER    |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|
| 0.1376        | 0.2655 | 110  | 0.1511          | 13.7193 | 2.9344 |
| 0.1419        | 0.5311 | 220  | 0.1723          | 15.5702 | 3.6979 |
| 0.1439        | 0.7966 | 330  | 0.1752          | 15.6375 | 3.6395 |
| 0.1235        | 1.0604 | 440  | 0.1764          | 14.8004 | 3.2088 |
| 0.0863        | 1.3259 | 550  | 0.1701          | 14.8572 | 3.2624 |
| 0.0824        | 1.5914 | 660  | 0.1651          | 14.6595 | 3.3526 |
| 0.0852        | 1.8570 | 770  | 0.1664          | 14.5354 | 3.1855 |
| 0.042         | 2.1207 | 880  | 0.1743          | 14.9645 | 3.3255 |
| 0.0419        | 2.3862 | 990  | 0.1658          | 14.0853 | 3.1087 |
| 0.0475        | 2.6518 | 1100 | 0.1571          | 13.5216 | 2.8987 |
| 0.0425        | 2.9173 | 1210 | 0.1578          | 13.3429 | 2.8445 |
| 0.0247        | 3.1811 | 1320 | 0.1648          | 13.1220 | 2.8129 |
| 0.0224        | 3.4466 | 1430 | 0.1619          | 13.1325 | 2.9317 |
| 0.0223        | 3.7121 | 1540 | 0.1603          | 13.0737 | 2.9002 |
| 0.0183        | 3.9777 | 1650 | 0.1598          | 13.1893 | 2.8903 |

Framework versions

  • Transformers 4.57.1
  • PyTorch 2.8.0+cu126
  • Datasets 4.0.0
  • Tokenizers 0.22.1