---
library_name: transformers
language:
- en
license: apache-2.0
base_model: google/bert_uncased_L-2_H-128_A-2
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert_uncased_L-2_H-128_A-2_qnli
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: GLUE QNLI
      type: glue
      args: qnli
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8004759289767527
---

# bert_uncased_L-2_H-128_A-2_qnli

This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the GLUE QNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4352
- Accuracy: 0.8005

## Model description

The base model is the smallest of the compact uncased BERT checkpoints (2 transformer layers, hidden size 128, 2 attention heads, often referred to as BERT-Tiny). It is fine-tuned here for QNLI, a binary sentence-pair task: given a question and a candidate sentence, predict whether the sentence contains the answer to the question. A usage sketch is provided at the end of this card.

## Intended uses & limitations

More information needed

## Training and evaluation data

The model was fine-tuned on the train split of GLUE QNLI; the loss and accuracy reported above and in the table below are measured on the validation split.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a reproduction sketch is given at the end of this card):
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5442        | 1.0   | 410  | 0.4932          | 0.7679   |
| 0.4861        | 2.0   | 820  | 0.4810          | 0.7692   |
| 0.4609        | 3.0   | 1230 | 0.4527          | 0.7882   |
| 0.4422        | 4.0   | 1640 | 0.4639          | 0.7803   |
| 0.4256        | 5.0   | 2050 | 0.4744          | 0.7770   |
| 0.409         | 6.0   | 2460 | 0.4702          | 0.7809   |
| 0.392         | 7.0   | 2870 | 0.4352          | 0.8005   |
| 0.3772        | 8.0   | 3280 | 0.4429          | 0.7970   |
| 0.3608        | 9.0   | 3690 | 0.4630          | 0.7875   |
| 0.3459        | 10.0  | 4100 | 0.5137          | 0.7688   |
| 0.3354        | 11.0  | 4510 | 0.4836          | 0.7880   |
| 0.3213        | 12.0  | 4920 | 0.4981          | 0.7842   |

### Framework versions

- Transformers 4.46.3
- PyTorch 2.2.1+cu118
- Datasets 2.17.0
- Tokenizers 0.20.3
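
## Usage example

A minimal inference sketch, assuming the checkpoint is published under a repository id like `your-namespace/bert_uncased_L-2_H-128_A-2_qnli` (the namespace is a placeholder, not part of this card). The label names are read from the checkpoint's own config; check `model.config.id2label` rather than assuming a fixed order.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Hypothetical repository id; replace with the actual id of this checkpoint.
model_id = "your-namespace/bert_uncased_L-2_H-128_A-2_qnli"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# QNLI is a sentence-pair task: does the sentence contain the answer to the question?
question = "What is the capital of France?"
sentence = "Paris is the capital and most populous city of France."

inputs = tokenizer(question, sentence, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = int(logits.argmax(dim=-1))
# Label names come from the checkpoint's config (typically "entailment" / "not_entailment").
print(model.config.id2label[predicted_id])
```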
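
## Reproducing training (sketch)

The card does not include the training script, so the following is only a sketch of a comparable run under the hyperparameters listed above, using the standard `datasets`/`Trainer` text-classification recipe; it is an assumption, not the author's actual code. The results table stops after epoch 12 even though `num_epochs` is 50, which suggests early stopping; the patience value below is inferred from the table (best accuracy at epoch 7, last row at epoch 12) and is not stated in the card.

```python
import numpy as np
import evaluate
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    EarlyStoppingCallback,
    Trainer,
    TrainingArguments,
)

base_model = "google/bert_uncased_L-2_H-128_A-2"
raw = load_dataset("glue", "qnli")
tokenizer = AutoTokenizer.from_pretrained(base_model)

def tokenize(batch):
    # QNLI pairs a question with a candidate answer sentence.
    return tokenizer(batch["question"], batch["sentence"], truncation=True)

tokenized = raw.map(tokenize, batched=True)
model = AutoModelForSequenceClassification.from_pretrained(base_model, num_labels=2)

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    return accuracy.compute(predictions=np.argmax(logits, axis=-1), references=labels)

args = TrainingArguments(
    output_dir="bert_uncased_L-2_H-128_A-2_qnli",
    learning_rate=5e-5,
    per_device_train_batch_size=256,
    per_device_eval_batch_size=256,
    num_train_epochs=50,
    seed=10,
    lr_scheduler_type="linear",
    eval_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="accuracy",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
    # Early stopping is inferred from the results table, not stated in the card.
    callbacks=[EarlyStoppingCallback(early_stopping_patience=5)],
)
trainer.train()
```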