---
language:
- fa
tags:
- generated_from_trainer
datasets:
- Amir13/ontonotes5-persian
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-parsbert-uncased-ontonotesv5
  results: []
---
# bert-base-parsbert-uncased-ontonotesv5

This model is a fine-tuned version of [HooshvareLab/bert-base-parsbert-uncased](https://huggingface.co/HooshvareLab/bert-base-parsbert-uncased) on the [ontonotes5-persian](https://huggingface.co/datasets/Amir13/ontonotes5-persian) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2169
- Precision: 0.8145
- Recall: 0.8287
- F1: 0.8215
- Accuracy: 0.9741

## Model description

A Persian named-entity-recognition (token classification) model: the uncased ParsBERT base model fine-tuned on [ontonotes5-persian](https://huggingface.co/datasets/Amir13/ontonotes5-persian), a Persian translation of the English OntoNotes v5 corpus (see the citation below for how the dataset was built).

## Intended uses & limitations

The model is intended for named-entity recognition on Persian text. Because the training data was produced by machine-translating OntoNotes v5 into Persian, translation artifacts may affect entity boundaries and labels, and performance outside OntoNotes-style text (news, broadcast, web) is untested. A minimal inference sketch follows.
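
A minimal usage sketch with the `transformers` pipeline API. The hub id below is a hypothetical guess based on the dataset's namespace; substitute the actual repository id or a local checkpoint path.

```python
from transformers import pipeline

# Hypothetical hub id (assumed from the dataset's namespace); replace with
# the real repository id or a local checkpoint directory.
ner = pipeline(
    "token-classification",
    model="Amir13/bert-base-parsbert-uncased-ontonotesv5",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

# "Amir lives in Tehran." in Persian.
print(ner("امیر در تهران زندگی می‌کند."))
```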

## Training and evaluation data

The model was trained and evaluated on [ontonotes5-persian](https://huggingface.co/datasets/Amir13/ontonotes5-persian), a machine-translated Persian version of the OntoNotes v5 named-entity corpus (see the citation below). From the step counts in the results table (2,310 steps per epoch at batch size 32), the training split contains roughly 74k examples. A loading sketch follows.
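
A minimal loading sketch with the `datasets` library; the split and column names are not documented in this card, so the code inspects them rather than assuming any.

```python
from datasets import load_dataset

# Loads whatever splits the dataset repository defines.
dataset = load_dataset("Amir13/ontonotes5-persian")
print(dataset)

# Inspect the feature schema (token and tag column names are not recorded
# in this card) before writing any preprocessing code.
print(next(iter(dataset.values())).features)
```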

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
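
A minimal sketch of how the list above maps onto `TrainingArguments`. `output_dir` is a placeholder, `evaluation_strategy` is assumed from the per-epoch rows in the results table, the per-device reading of the batch sizes is an assumption (the card does not record the device count), and the listed Adam betas and epsilon are the library defaults.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-base-parsbert-uncased-ontonotesv5",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=32,  # assumes a single device
    per_device_eval_batch_size=32,   # assumes a single device
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=15,
    evaluation_strategy="epoch",     # assumed: the table reports one eval per epoch
)
```

With these arguments, evaluation runs once per epoch, matching the rows in the results table below.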

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1029        | 1.0   | 2310  | 0.1151          | 0.8080    | 0.7559 | 0.7811 | 0.9691   |
| 0.059         | 2.0   | 4620  | 0.1098          | 0.7909    | 0.8068 | 0.7988 | 0.9719   |
| 0.0363        | 3.0   | 6930  | 0.1205          | 0.7981    | 0.8168 | 0.8074 | 0.9728   |
| 0.0202        | 4.0   | 9240  | 0.1406          | 0.8115    | 0.8046 | 0.8080 | 0.9726   |
| 0.0122        | 5.0   | 11550 | 0.1496          | 0.7847    | 0.8225 | 0.8031 | 0.9721   |
| 0.0105        | 6.0   | 13860 | 0.1633          | 0.7962    | 0.8188 | 0.8073 | 0.9724   |
| 0.0057        | 7.0   | 16170 | 0.1842          | 0.8071    | 0.8133 | 0.8102 | 0.9729   |
| 0.0041        | 8.0   | 18480 | 0.1913          | 0.8081    | 0.8093 | 0.8087 | 0.9727   |
| 0.003         | 9.0   | 20790 | 0.1935          | 0.8121    | 0.8130 | 0.8126 | 0.9732   |
| 0.002         | 10.0  | 23100 | 0.1992          | 0.8136    | 0.8214 | 0.8175 | 0.9734   |
| 0.002         | 11.0  | 25410 | 0.2037          | 0.8014    | 0.8280 | 0.8145 | 0.9735   |
| 0.0012        | 12.0  | 27720 | 0.2092          | 0.8133    | 0.8204 | 0.8168 | 0.9737   |
| 0.001         | 13.0  | 30030 | 0.2095          | 0.8125    | 0.8253 | 0.8188 | 0.9739   |
| 0.0006        | 14.0  | 32340 | 0.2143          | 0.8129    | 0.8272 | 0.8200 | 0.9740   |
| 0.0005        | 15.0  | 34650 | 0.2169          | 0.8145    | 0.8287 | 0.8215 | 0.9741   |

### Framework versions

- Transformers 4.27.0.dev0
- PyTorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2

## Citation

If you use the datasets or models in this repository, please cite the following paper:

```bibtex
@misc{https://doi.org/10.48550/arxiv.2302.09611,
  doi = {10.48550/ARXIV.2302.09611},
  url = {https://arxiv.org/abs/2302.09611},
  author = {Sartipi, Amir and Fatemi, Afsaneh},
  keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences},
  title = {Exploring the Potential of Machine Translation for Generating Named Entity Datasets: A Case Study between Persian and English},
  publisher = {arXiv},
  year = {2023},
  copyright = {arXiv.org perpetual, non-exclusive license}
}
```