---
language: multilingual
thumbnail:
---

# [XLM](https://github.com/facebookresearch/XLM/) (multilingual version) fine-tuned for multilingual Q&A

Released by `Facebook` together with the paper [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau, and fine-tuned on [XQuAD](https://github.com/deepmind/xquad) for the multilingual (`11 different languages`) **Q&A** downstream task.

## Details of the language model (`xlm-mlm-100-1280`)

[Language model](https://github.com/facebookresearch/XLM/#ii-cross-lingual-language-model-pretraining-xlm)

| Languages |
| --------- |
| 100       |

It includes the following languages:

<details>
en-es-fr-de-zh-ru-pt-it-ar-ja-id-tr-nl-pl-simple-fa-vi-sv-ko-he-ro-no-hi-uk-cs-fi-hu-th-da-ca-el-bg-sr-ms-bn-hr-sl-zh_yue-az-sk-eo-ta-sh-lt-et-ml-la-bs-sq-arz-af-ka-mr-eu-tl-ang-gl-nn-ur-kk-be-hy-te-lv-mk-zh_classical-als-is-wuu-my-sco-mn-ceb-ast-cy-kn-br-an-gu-bar-uz-lb-ne-si-war-jv-ga-zh_min_nan-oc-ku-sw-nds-ckb-ia-yi-fy-scn-gan-tt-am
</details>
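
A minimal sketch of loading the base checkpoint with the standard `transformers` auto classes (assuming the canonical `xlm-mlm-100-1280` hub id; this snippet is illustrative, not from the original card):

```python
from transformers import AutoModel, AutoTokenizer

# Illustrative only: load the base multilingual XLM checkpoint
# that this model was fine-tuned from.
tokenizer = AutoTokenizer.from_pretrained("xlm-mlm-100-1280")
model = AutoModel.from_pretrained("xlm-mlm-100-1280")

# One shared subword vocabulary covers all 100 languages.
print(tokenizer.vocab_size)
```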

## Details of the downstream task (multilingual Q&A) - Dataset

DeepMind [XQuAD](https://github.com/deepmind/xquad)

Languages covered:

- Arabic: `ar`
- German: `de`
- Greek: `el`
- English: `en`
- Spanish: `es`
- Hindi: `hi`
- Russian: `ru`
- Thai: `th`
- Turkish: `tr`
- Vietnamese: `vi`
- Chinese: `zh`

As the dataset is based on SQuAD v1.1, there are no unanswerable questions in the data. We chose this setting so that models can focus on cross-lingual transfer.
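
Since XQuAD follows the SQuAD v1.1 schema, each item pairs a context paragraph with questions whose answers are character-offset spans of that paragraph. A minimal sketch with a made-up record (text, id, and offset below are purely illustrative):

```python
# Hypothetical SQuAD v1.1 / XQuAD-style record, for illustration only.
example = {
    "context": "The Amazon rainforest covers most of the Amazon basin of South America.",
    "qas": [
        {
            "id": "xquad-demo-0001",  # made-up id
            "question": "Which basin does the Amazon rainforest cover?",
            "answers": [
                # Every question is answerable; the span is given as
                # answer text plus its character offset in the context.
                {"text": "the Amazon basin", "answer_start": 37},
            ],
        }
    ],
}

# Sanity check: the offset really points at the answer span.
ans = example["qas"][0]["answers"][0]
assert example["context"][ans["answer_start"]:].startswith(ans["text"])
```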

We show the average number of tokens per paragraph, question, and answer for each language in the table below. The statistics were obtained using [Jieba](https://github.com/fxsjy/jieba) for Chinese and the [Moses tokenizer](https://github.com/moses-smt/mosesdecoder/blob/master/scripts/tokenizer/tokenizer.perl) for the other languages.

|           |  en   |  es   |  de   |  el   |  ru   |  tr   |  ar   |  vi   |  th   |  zh   |  hi   |
| --------- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Paragraph | 142.4 | 160.7 | 139.5 | 149.6 | 133.9 | 126.5 | 128.2 | 191.2 | 158.7 | 147.6 | 232.4 |
| Question  | 11.5  | 13.4  | 11.0  | 11.7  | 10.0  |  9.8  | 10.7  | 14.8  | 11.5  | 10.5  | 18.7  |
| Answer    |  3.1  |  3.6  |  3.0  |  3.3  |  3.1  |  3.1  |  3.1  |  4.5  |  4.1  |  3.5  |  5.6  |
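
A rough sketch of how such token counts can be reproduced in Python (an assumption on my part: the `sacremoses` port stands in here for the original Moses Perl script):

```python
import jieba
from sacremoses import MosesTokenizer

def count_tokens(text: str, lang: str) -> int:
    """Token count using Jieba for Chinese and Moses for the rest."""
    if lang == "zh":
        return len(jieba.lcut(text))
    return len(MosesTokenizer(lang=lang).tokenize(text))

print(count_tokens("How many tokens does this question have?", "en"))
print(count_tokens("这个问题有多少个词?", "zh"))
```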

Citation:

<details>

```bibtex
@article{Artetxe:etal:2019,
  author        = {Mikel Artetxe and Sebastian Ruder and Dani Yogatama},
  title         = {On the cross-lingual transferability of monolingual representations},
  journal       = {CoRR},
  volume        = {abs/1910.11856},
  year          = {2019},
  archivePrefix = {arXiv},
  eprint        = {1910.11856}
}
```

</details>

As XQuAD is just an evaluation dataset, I used data augmentation techniques (scraping, neural machine translation, etc.) to obtain more samples and split the dataset in order to have train and test sets. The test set was created in a way that contains the same number of samples for each language (a sketch of this balancing step follows the table below). Finally, I got:

| Dataset     | # samples |
| ----------- | --------- |
| XQUAD train | 50 K      |
| XQUAD test  | 8 K       |
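
A hypothetical sketch of that per-language balancing step (`samples` is assumed to be a list of dicts carrying a `lang` key; all names are illustrative, not from the original training code):

```python
import random
from collections import defaultdict

def balanced_split(samples, per_lang_test, seed=42):
    """Hold out the same number of test samples for every language."""
    rng = random.Random(seed)
    by_lang = defaultdict(list)
    for sample in samples:
        by_lang[sample["lang"]].append(sample)
    train, test = [], []
    for items in by_lang.values():
        rng.shuffle(items)
        test.extend(items[:per_lang_test])   # equal count per language
        train.extend(items[per_lang_test:])  # remainder goes to train
    return train, test
```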

## Model training

The model was trained on a Tesla P100 GPU with 25 GB of RAM.
The script used for fine-tuning can be found [here](https://github.com/huggingface/transformers/blob/master/examples/distillation/run_squad_w_distillation.py).

## Model in action

Fast usage with **pipelines**:

```python
from transformers import pipeline

qa_pipeline = pipeline(
    "question-answering",
    model="mrm8488/xlm-multi-finetuned-xquadv1",
    tokenizer="mrm8488/xlm-multi-finetuned-xquadv1"
)

# English
qa_pipeline({
    'context': "Manuel Romero has been working hardly in the repository hugginface/transformers lately",
    'question': "Who has been working hard for hugginface/transformers lately?"
})
# Output: {'answer': 'Manuel', 'end': 6, 'score': 8.531880747878265e-05, 'start': 0}

# Russian (the question asks, roughly: "Who has been working hard on
# hugginface/transformers lately?")
qa_pipeline({
    'context': "Мануэль Ромеро в последнее время почти не работал в репозитории hugginface / transformers",
    'question': "Кто в последнее время усердно работал над обнимашками / трансформерами?"
})
# Output: {'answer': 'работал в репозитории hugginface /', 'end': 76, 'score': 0.00012340750456964894, 'start': 42}
```
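
For readers curious what the pipeline does under the hood, here is a simplified hand-rolled sketch (assumptions: greedy argmax span selection, and none of the pipeline's offset mapping or score normalization):

```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model_id = "mrm8488/xlm-multi-finetuned-xquadv1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForQuestionAnswering.from_pretrained(model_id)

question = "Who has been working hard for hugginface/transformers lately?"
context = "Manuel Romero has been working hardly in the repository hugginface/transformers lately"

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Take the most likely start/end token positions and decode that span.
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))
```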

Try it on a Colab (*do not forget to change the model and tokenizer paths in the Colab if necessary*):

<a href="https://colab.research.google.com/github/mrm8488/shared_colab_notebooks/blob/master/Try_mrm8488_xquad_finetuned_uncased_model.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>

> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488)

> Made with <span style="color: #e25555;">♥</span> in Spain