wav2vec2-lora-l2arctic-14-11

This model is a fine-tuned version of facebook/wav2vec2-base on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 6.4729
  • Wer: 0.9774

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0003
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 32
  • optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • num_epochs: 10
  • mixed_precision_training: Native AMP
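
The hyperparameters above map directly onto `transformers.TrainingArguments`. A minimal sketch (the `output_dir` is illustrative, and `fp16=True` stands in for "Native AMP"; everything else is taken from the list above):

```python
from transformers import TrainingArguments

# Sketch of the training configuration described in this card.
# output_dir is a placeholder; all other values come from the hyperparameter list.
training_args = TrainingArguments(
    output_dir="wav2vec2-lora-l2arctic-14-11",   # illustrative path
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=2,                # effective batch size: 16 * 2 = 32
    optim="adamw_torch_fused",
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=10,
    fp16=True,                                    # native AMP mixed precision
)
```

Note that the total train batch size of 32 is not set directly; it follows from `per_device_train_batch_size * gradient_accumulation_steps`.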

Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer    |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 15.2102       | 0.6798 | 500  | 7.4468          | 1.0    |
| 15.1765       | 1.3589 | 1000 | 7.4114          | 1.0    |
| 15.0308       | 2.0381 | 1500 | 7.3957          | 1.0    |
| 15.0279       | 2.7179 | 2000 | 7.3889          | 1.0    |
| 15.0441       | 3.3970 | 2500 | 7.4176          | 1.0    |
| 14.9453       | 4.0761 | 3000 | 7.3914          | 1.0    |
| 14.9498       | 4.7559 | 3500 | 7.2120          | 1.0    |
| 14.5327       | 5.4351 | 4000 | 7.1238          | 1.0    |
| 14.2946       | 6.1142 | 4500 | 6.9881          | 1.0000 |
| 14.1171       | 6.7940 | 5000 | 6.7281          | 0.9880 |
| 13.7084       | 7.4731 | 5500 | 6.4729          | 0.9774 |
| 13.3492       | 8.1523 | 6000 | 6.2711          | 0.9827 |
| 13.2188       | 8.8321 | 6500 | 6.1801          | 0.9794 |
| 13.1921       | 9.5112 | 7000 | 6.1092          | 0.9797 |
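
The Wer column is word error rate: word-level edit distance between the reference and hypothesis transcripts, divided by the number of reference words (so it starts at 1.0 when the model emits nothing useful and falls as recognition improves). A minimal pure-Python sketch of the computation — an illustration, not the exact metric implementation used during training:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / number of reference words."""
    r, h = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words
    dp = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        dp[i][0] = i          # deleting i reference words
    for j in range(len(h) + 1):
        dp[0][j] = j          # inserting j hypothesis words
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = 0 if r[i - 1] == h[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,        # deletion
                dp[i][j - 1] + 1,        # insertion
                dp[i - 1][j - 1] + sub,  # match or substitution
            )
    return dp[len(r)][len(h)] / len(r)
```

For example, `wer("the cat sat", "the bat sat")` is 1/3: one substitution over three reference words.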

Framework versions

  • PEFT 0.18.0
  • Transformers 4.57.1
  • Pytorch 2.9.1+cu128
  • Datasets 4.4.1
  • Tokenizers 0.22.1
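
Since this is a PEFT LoRA adapter rather than a full checkpoint, inference requires loading the base model first and applying the adapter on top. A hedged sketch, assuming the processor files were pushed alongside the adapter (if not, load the processor from the checkpoint that defined the tokenizer vocabulary); the silent input audio is a placeholder. Untested as written, since it downloads weights from the Hub:

```python
import torch
from transformers import AutoModelForCTC, AutoProcessor
from peft import PeftModel

# Load the frozen base model, then attach this card's LoRA adapter.
base = AutoModelForCTC.from_pretrained("facebook/wav2vec2-base")
model = PeftModel.from_pretrained(base, "maiduchuy321/wav2vec2-lora-l2arctic-14-11")
processor = AutoProcessor.from_pretrained("maiduchuy321/wav2vec2-lora-l2arctic-14-11")
model.eval()

# Placeholder: one second of silence at 16 kHz; replace with real audio samples.
speech = torch.zeros(16_000).numpy()
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```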
Model size: 94.4M params (F32, Safetensors)