---
license: apache-2.0
language:
  - lus
base_model: facebook/wav2vec2-xls-r-1b
tags:
  - mizo
  - audio
  - automatic-speech-recognition
  - lus
datasets:
  - generator
metrics:
  - wer
model-index:
  - name: wav2vec2-xls-r-1b-mizo-lus-v25.3
    results:
      - task:
          name: Automatic Speech Recognition
          type: automatic-speech-recognition
        dataset:
          name: generator
          type: generator
          config: default
          split: train
          args: default
        metrics:
          - name: Wer
            type: wer
            value: 0.1520826294705343
---

# Mizo Automatic Speech Recognition

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the generator dataset. It achieves the following results on the evaluation set:

- Loss: 0.1267
- Wer: 0.1435
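
## Usage

A minimal inference sketch using the `transformers` pipeline API. The repo id and audio file name below are assumptions, not taken from this card; wav2vec2 models expect 16 kHz mono audio, and the pipeline resamples file input via ffmpeg.

```python
# Minimal inference sketch -- repo id and audio path are assumed, not from the card.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="andrewbawitlung/wav2vec2-xls-r-1b-mizo-lus-v25.3",  # assumed repo id
)

# wav2vec2 expects 16 kHz mono audio; the pipeline resamples file input for you.
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder
```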

## Citation

BibTeX entry and citation info:

```bibtex
@article{10.1145/3746063,
  author = {Bawitlung, Andrew and Dash, Sandeep Kumar and Pattanayak, Radha Mohan},
  title = {Mizo Automatic Speech Recognition: Leveraging Wav2vec 2.0 and XLS-R for Enhanced Accuracy in Low-Resource Language Processing},
  year = {2025},
  issue_date = {July 2025},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  volume = {24},
  number = {7},
  issn = {2375-4699},
  url = {https://doi.org/10.1145/3746063},
  doi = {10.1145/3746063},
  journal = {ACM Trans. Asian Low-Resour. Lang. Inf. Process.},
  month = jul,
  articleno = {72},
  numpages = {15},
}
```

## Training and evaluation data

The model was trained and evaluated on MiZonal v1.0.

## Training hyperparameters

The following hyperparameters were used during training (see the sketch after this list):

- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 50
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 28
- mixed_precision_training: Native AMP
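
For readers reproducing a similar run, here is a sketch of how these values map onto `transformers` `TrainingArguments`; `output_dir` is a placeholder, and the Adam betas/epsilon listed above are the `TrainingArguments` defaults.

```python
# Sketch: the hyperparameters above expressed as TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-xls-r-1b-mizo-lus-v25.3",  # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=50,
    gradient_accumulation_steps=8,  # 8 x 8 = total train batch size of 64
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=28,
    fp16=True,  # Native AMP mixed-precision training
)
```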

## Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    |
|---------------|-------|------|-----------------|--------|
| 2.2954        | 2.18  | 300  | 0.3737          | 0.4528 |
| 0.6507        | 4.35  | 600  | 0.1903          | 0.2866 |
| 0.492         | 6.53  | 900  | 0.1740          | 0.2419 |
| 0.4302        | 8.7   | 1200 | 0.1503          | 0.2189 |
| 0.3512        | 10.88 | 1500 | 0.1344          | 0.1884 |
| 0.2963        | 13.06 | 1800 | 0.1264          | 0.2071 |
| 0.2536        | 15.23 | 2100 | 0.1250          | 0.1868 |
| 0.2075        | 17.41 | 2400 | 0.1217          | 0.1599 |
| 0.1775        | 19.58 | 2700 | 0.1121          | 0.1602 |
| 0.151         | 21.76 | 3000 | 0.1204          | 0.1601 |
| 0.1253        | 23.93 | 3300 | 0.1211          | 0.1435 |
| 0.1073        | 26.11 | 3600 | 0.1267          | 0.1521 |
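
The Wer column is word error rate: the fraction of word-level substitutions, insertions, and deletions against the reference transcript. A sketch of computing it with the `evaluate` library follows; the Mizo strings are hypothetical examples, not taken from MiZonal.

```python
# Sketch: computing word error rate (WER) with the evaluate library.
import evaluate

wer_metric = evaluate.load("wer")
score = wer_metric.compute(
    predictions=["ka lawm e"],   # hypothetical model transcription
    references=["ka lawm a e"],  # hypothetical ground-truth transcript
)
print(f"WER: {score:.4f}")  # one deletion over four reference words -> 0.2500
```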

## Framework versions

- Transformers 4.37.2
- Pytorch 2.3.1+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1