HamdanXI/wav2vec2-base-myst-new

Tags: automatic-speech-recognition · transformers · safetensors · wav2vec2 · generated_from_trainer · base_model: facebook/wav2vec2-base · license: apache-2.0

wav2vec2-base-myst-new

This model is a fine-tuned version of facebook/wav2vec2-base on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.4262
  • Wer: 0.1249
  • Cer: 0.0583
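Wer and Cer above are word and character error rates: the Levenshtein edit distance between reference and hypothesis, normalized by reference length. The card's actual evaluation likely uses a library such as `jiwer` or `evaluate`; the sketch below is a hypothetical, self-contained illustration of the metric only.

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences (words or characters)."""
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, 1):
            prev, dp[j] = dp[j], min(
                dp[j] + 1,        # deletion
                dp[j - 1] + 1,    # insertion
                prev + (r != h),  # substitution (free if tokens match)
            )
    return dp[len(hyp)]

def wer(reference, hypothesis):
    """Word error rate: word-level edit distance / reference word count."""
    ref_words = reference.split()
    return edit_distance(ref_words, hypothesis.split()) / len(ref_words)

def cer(reference, hypothesis):
    """Character error rate: character-level edit distance / reference length."""
    return edit_distance(reference, hypothesis) / len(reference)
```

For example, `wer("a b c d", "a b x d")` is 0.25: one substitution over four reference words.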

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 32
  • optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.06
  • num_epochs: 20
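The derived values above follow directly from the listed settings. A quick sanity check (the total step count is an assumption read off the training log below, where the last logged step is 34,000):

```python
# Effective batch size = per-device batch size x gradient accumulation steps
train_batch_size = 8
gradient_accumulation_steps = 4
total_train_batch_size = train_batch_size * gradient_accumulation_steps
# total_train_batch_size == 32, matching the listed value

# Warmup length implied by the warmup ratio; 34,000 is the last logged
# training step (an approximation of the true total step count)
warmup_ratio = 0.06
approx_total_steps = 34_000
warmup_steps = round(warmup_ratio * approx_total_steps)
# roughly 2,040 warmup steps under this assumption
```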

Training results

| Training Loss | Epoch   | Step  | Validation Loss | Wer    | Cer    |
|:-------------:|:-------:|:-----:|:---------------:|:------:|:------:|
| 0.5255        | 1.1615  | 2000  | 0.4068          | 0.2182 | 0.0884 |
| 0.358         | 2.3229  | 4000  | 0.3476          | 0.1657 | 0.0713 |
| 0.3055        | 3.4844  | 6000  | 0.3039          | 0.1564 | 0.0663 |
| 0.2571        | 4.6459  | 8000  | 0.2945          | 0.1472 | 0.0641 |
| 0.2373        | 5.8073  | 10000 | 0.3080          | 0.1457 | 0.0635 |
| 0.2277        | 6.9688  | 12000 | 0.3035          | 0.1370 | 0.0619 |
| 0.184         | 8.1301  | 14000 | 0.3264          | 0.1336 | 0.0603 |
| 0.155         | 9.2916  | 16000 | 0.3322          | 0.1348 | 0.0614 |
| 0.1459        | 10.4530 | 18000 | 0.3464          | 0.1340 | 0.0617 |
| 0.1396        | 11.6145 | 20000 | 0.3306          | 0.1330 | 0.0610 |
| 0.1288        | 12.7760 | 22000 | 0.3563          | 0.1294 | 0.0595 |
| 0.1123        | 13.9374 | 24000 | 0.3605          | 0.1294 | 0.0598 |
| 0.1061        | 15.0987 | 26000 | 0.3896          | 0.1287 | 0.0595 |
| 0.0938        | 16.2602 | 28000 | 0.3904          | 0.1274 | 0.0591 |
| 0.0851        | 17.4217 | 30000 | 0.4189          | 0.1252 | 0.0585 |
| 0.0858        | 18.5831 | 32000 | 0.4145          | 0.1260 | 0.0587 |
| 0.076         | 19.7446 | 34000 | 0.4262          | 0.1249 | 0.0583 |
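One pattern worth noting in the log: validation loss bottoms out around epoch 5 and then climbs, while WER and CER keep improving through the final step, so selecting a checkpoint by validation loss alone would discard the best-transcribing model. The snippet below illustrates this with a subset of rows copied from the table (the selection logic is a generic sketch, not the card's actual checkpointing code):

```python
# (step, validation_loss, wer) triples, a subset of rows from the table above
log = [
    (2000,  0.4068, 0.2182),
    (8000,  0.2945, 0.1472),
    (10000, 0.3080, 0.1457),
    (20000, 0.3306, 0.1330),
    (30000, 0.4189, 0.1252),
    (34000, 0.4262, 0.1249),
]

# The two selection criteria disagree: loss favors an early checkpoint,
# WER favors the final one.
best_by_loss = min(log, key=lambda row: row[1])  # step 8000
best_by_wer = min(log, key=lambda row: row[2])   # step 34000
```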

Framework versions

  • Transformers 4.57.0
  • Pytorch 2.8.0+cu128
  • Datasets 4.1.1
  • Tokenizers 0.22.1