---
library_name: transformers
license: afl-3.0
base_model: masakhane/afroxlmr-large-ner-masakhaner-1.0_2.0
tags:
- named-entity-recognition
- lumasaba
- african-language
- pii-detection
- token-classification
- generated_from_trainer
datasets:
- Beijuka/Multilingual_PII_NER_dataset
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: multilingual-masakhane/afroxlmr-large-ner-masakhaner-1.0_2.0-lumasaba-ner-v1
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: Beijuka/Multilingual_PII_NER_dataset
      type: Beijuka/Multilingual_PII_NER_dataset
      args: 'split: train+validation+test'
    metrics:
    - name: Precision
      type: precision
      value: 0.9702892885066459
    - name: Recall
      type: recall
      value: 0.9487767584097859
    - name: F1
      type: f1
      value: 0.9594124468496328
    - name: Accuracy
      type: accuracy
      value: 0.9525409491810164
---

# multilingual-masakhane/afroxlmr-large-ner-masakhaner-1.0_2.0-lumasaba-ner-v1

This model is a fine-tuned version of [masakhane/afroxlmr-large-ner-masakhaner-1.0_2.0](https://huggingface.co/masakhane/afroxlmr-large-ner-masakhaner-1.0_2.0) on the Beijuka/Multilingual_PII_NER_dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3834
- Precision: 0.9703
- Recall: 0.9488
- F1: 0.9594
- Accuracy: 0.9525

## Model description

This is a token-classification model for detecting personally identifiable information (PII) in Lumasaba (Masaba), a Bantu language spoken in eastern Uganda. It was obtained by fine-tuning the MasakhaNER checkpoint masakhane/afroxlmr-large-ner-masakhaner-1.0_2.0, itself based on AfroXLMR-large.

## Intended uses & limitations

The model is intended for named-entity recognition of PII in Lumasaba text; a usage sketch is given at the end of this card. It has only been evaluated on the Beijuka/Multilingual_PII_NER_dataset, and performance on other domains or languages has not been assessed.

## Training and evaluation data

The model was fine-tuned and evaluated on the Lumasaba portion of the Beijuka/Multilingual_PII_NER_dataset; the results in the model index are reported over the train+validation+test splits.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a minimal `TrainingArguments` sketch mirroring these settings appears at the end of this card):
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: AdamW (torch fused, `OptimizerNames.ADAMW_TORCH_FUSED`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 1.1185        | 1.0   | 796  | 0.5047          | 0.8681    | 0.8810 | 0.8745 | 0.8735   |
| 0.3868        | 2.0   | 1592 | 0.4627          | 0.9012    | 0.9146 | 0.9079 | 0.9108   |
| 0.2335        | 3.0   | 2388 | 0.4419          | 0.9115    | 0.9272 | 0.9193 | 0.9198   |
| 0.1462        | 4.0   | 3184 | 0.3402          | 0.9499    | 0.9507 | 0.9503 | 0.9520   |
| 0.1072        | 5.0   | 3980 | 0.2399          | 0.9560    | 0.9538 | 0.9549 | 0.9563   |
| 0.0916        | 6.0   | 4776 | 0.3072          | 0.9548    | 0.9593 | 0.9570 | 0.9588   |
| 0.0432        | 7.0   | 5572 | 0.3124          | 0.9573    | 0.9663 | 0.9618 | 0.9605   |
| 0.0383        | 8.0   | 6368 | 0.3386          | 0.9669    | 0.9608 | 0.9639 | 0.9575   |
| 0.0502        | 9.0   | 7164 | 0.4429          | 0.9644    | 0.9554 | 0.9599 | 0.9550   |
| 0.0349        | 10.0  | 7960 | 0.4191          | 0.9605    | 0.9522 | 0.9564 | 0.9481   |
| 0.039         | 11.0  | 8756 | 0.4815          | 0.9558    | 0.9648 | 0.9602 | 0.9537   |

### Framework versions

- Transformers 4.55.4
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
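
## How to use

A minimal inference sketch with the Transformers pipeline API. The repository id below is an assumption taken from this card's title; substitute the actual Hub id if it differs. The input string is a placeholder, not real Lumasaba text.

```python
from transformers import pipeline

# Assumed repo id, copied from this card's title; adjust if the Hub id differs.
model_id = "multilingual-masakhane/afroxlmr-large-ner-masakhaner-1.0_2.0-lumasaba-ner-v1"

# aggregation_strategy="simple" merges B-/I- subword predictions into whole entity spans.
ner = pipeline("token-classification", model=model_id, aggregation_strategy="simple")

# Placeholder input; replace with real Lumasaba text containing names, phone numbers, etc.
for entity in ner("<Lumasaba sentence with PII>"):
    print(entity["entity_group"], entity["word"], f"{entity['score']:.3f}")
```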
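
## Reproducing the training configuration

The hyperparameters listed under "Training hyperparameters" map onto `TrainingArguments` roughly as below. This is a sketch, not the original training script: the output directory name is a placeholder, per-epoch evaluation is inferred from the results table, and dataset preprocessing plus the `Trainer` call are omitted.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="afroxlmr-lumasaba-ner-v1",  # placeholder name
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    optim="adamw_torch_fused",              # OptimizerNames.ADAMW_TORCH_FUSED
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    eval_strategy="epoch",                  # inferred: validation metrics are reported once per epoch
)
```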