infinitejoy/wav2vec2-large-xls-r-300m-welsh

wav2vec2-large-xls-r-300m-welsh

This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - CY (Welsh) dataset. It achieves the following results on the evaluation set:

  • Loss: 0.2650
  • WER: 0.2702
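A minimal usage sketch, assuming the transformers and torch packages are installed. The audio filename is a placeholder; the checkpoint expects 16 kHz mono speech:

```python
from transformers import pipeline

def load_welsh_asr():
    # Load the checkpoint into an ASR pipeline (downloads the model weights on first use).
    return pipeline(
        "automatic-speech-recognition",
        model="infinitejoy/wav2vec2-large-xls-r-300m-welsh",
    )

if __name__ == "__main__":
    asr = load_welsh_asr()
    # "example.wav" is a placeholder path; supply 16 kHz mono Welsh speech.
    print(asr("example.wav")["text"])
```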

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 7e-05
  • train_batch_size: 32
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 3000
  • num_epochs: 50.0
  • mixed_precision_training: Native AMP
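The optimizer and linear warmup/decay schedule above can be sketched in plain PyTorch; the tiny linear model and the 18000-step horizon (taken from the last row of the results table below) are illustrative assumptions, not the training script itself:

```python
import torch

# Placeholder model standing in for the fine-tuned wav2vec2 network.
model = torch.nn.Linear(4, 2)

# Adam with the listed betas and epsilon, at the listed learning rate.
optimizer = torch.optim.Adam(model.parameters(), lr=7e-5, betas=(0.9, 0.999), eps=1e-8)

warmup_steps, total_steps = 3000, 18000  # 18000 matches the final step in the results table

def linear_warmup_decay(step: int) -> float:
    """LR multiplier: ramp linearly to 1.0 over warmup, then decay linearly to 0."""
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    return max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, linear_warmup_decay)
```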

Training results

| Training Loss | Epoch | Step  | Validation Loss | WER    |
|---------------|-------|-------|-----------------|--------|
| 1.3454        | 8.2   | 3000  | 0.4926          | 0.5703 |
| 1.1202        | 16.39 | 6000  | 0.3529          | 0.3944 |
| 1.0058        | 24.59 | 9000  | 0.3143          | 0.3341 |
| 0.9287        | 32.79 | 12000 | 0.2896          | 0.2980 |
| 0.8849        | 40.98 | 15000 | 0.2727          | 0.2798 |
| 0.8665        | 49.18 | 18000 | 0.2662          | 0.2696 |
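For reference, the WER column is word error rate: word-level edit distance divided by the reference word count. A self-contained sketch with made-up Welsh sentences (not taken from the evaluation set):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: Levenshtein distance over words / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over word sequences.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One dropped word out of a five-word reference -> WER 0.2
print(wer("mae hi yn braf heddiw", "mae hi braf heddiw"))
```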

Framework versions

  • Transformers 4.16.0.dev0
  • Pytorch 1.10.1+cu102
  • Datasets 1.18.3
  • Tokenizers 0.11.0