---
library_name: transformers
language:
- en
license: apache-2.0
base_model: google/bert_uncased_L-2_H-128_A-2
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert_uncased_L-2_H-128_A-2_mnli
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: GLUE MNLI
      type: glue
      args: mnli
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.7112489829129374
---
# bert_uncased_L-2_H-128_A-2_mnli
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the GLUE MNLI dataset. It achieves the following results on the evaluation set:
- Loss: 0.6901
- Accuracy: 0.7112
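As a minimal usage sketch: the checkpoint can be loaded for three-way NLI classification (entailment / neutral / contradiction) with the standard `transformers` API. The repo id below is a placeholder for wherever this checkpoint is published, and the label mapping depends on the `id2label` saved by the Trainer:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Placeholder repo id -- substitute the actual path of this checkpoint.
model_id = "your-username/bert_uncased_L-2_H-128_A-2_mnli"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# MNLI pairs a premise with a hypothesis.
premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# GLUE MNLI's dataset convention is 0 = entailment, 1 = neutral,
# 2 = contradiction; check model.config.id2label for this checkpoint.
predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])
```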
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50
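A hedged sketch of how these hyperparameters map onto `transformers.TrainingArguments`. The original training script is not included with this card, so the output directory is a placeholder, and whether the batch size of 256 was per device or total is an assumption:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert_uncased_L-2_H-128_A-2_mnli",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=256,  # assumed per-device
    per_device_eval_batch_size=256,
    seed=10,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=50,
    eval_strategy="epoch",  # the results table reports one eval per epoch
)
```

Note that the results table below stops at epoch 16 of the configured 50, which is consistent with (but does not confirm) an early-stopping callback.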
### Training results
| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.9155 | 1.0 | 1534 | 0.8197 | 0.6342 |
| 0.8189 | 2.0 | 3068 | 0.7689 | 0.6626 |
| 0.7747 | 3.0 | 4602 | 0.7417 | 0.6760 |
| 0.7449 | 4.0 | 6136 | 0.7285 | 0.6852 |
| 0.7198 | 5.0 | 7670 | 0.7111 | 0.6934 |
| 0.6996 | 6.0 | 9204 | 0.7118 | 0.6977 |
| 0.6812 | 7.0 | 10738 | 0.7005 | 0.7030 |
| 0.6649 | 8.0 | 12272 | 0.6981 | 0.7043 |
| 0.6491 | 9.0 | 13806 | 0.7057 | 0.7036 |
| 0.6358 | 10.0 | 15340 | 0.6983 | 0.7077 |
| 0.6224 | 11.0 | 16874 | 0.6966 | 0.7064 |
| 0.6109 | 12.0 | 18408 | 0.7001 | 0.7145 |
| 0.5994 | 13.0 | 19942 | 0.7014 | 0.7113 |
| 0.5872 | 14.0 | 21476 | 0.7061 | 0.7084 |
| 0.5779 | 15.0 | 23010 | 0.7054 | 0.7168 |
| 0.5681 | 16.0 | 24544 | 0.7059 | 0.7147 |
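To check the reported accuracy, a minimal evaluation sketch. It assumes the metric was computed on MNLI's `validation_matched` split (the usual default for this setup; the card does not say which split was used) and reuses the placeholder repo id from above:

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "your-username/bert_uncased_L-2_H-128_A-2_mnli"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id).eval()

# Assumption: evaluation on the matched validation split.
dataset = load_dataset("glue", "mnli", split="validation_matched")

correct = 0
for start in range(0, len(dataset), 256):
    batch = dataset[start : start + 256]  # dict of column lists
    inputs = tokenizer(
        batch["premise"], batch["hypothesis"],
        padding=True, truncation=True, return_tensors="pt",
    )
    with torch.no_grad():
        preds = model(**inputs).logits.argmax(dim=-1)
    correct += (preds == torch.tensor(batch["label"])).sum().item()

print(f"accuracy: {correct / len(dataset):.4f}")
```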
### Framework versions
- Transformers 4.46.3
- Pytorch 2.2.1+cu118
- Datasets 2.17.0
- Tokenizers 0.20.3