---
license: apache-2.0
pipeline_tag: text-generation
tags:
- finetuned
inference:
  parameters:
    temperature: 0.7
---

# Model Card for Mistral-7B-Instruct-v0.1

The Mistral-7B-Instruct-v0.1 Large Language Model (LLM) is an instruct fine-tuned version of the [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) generative text model, trained on a variety of publicly available conversation datasets.

For full details of this model, please read our [release blog post](https://mistral.ai/news/announcing-mistral-7b/).
## Instruction format

In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a beginning-of-sentence token id; subsequent instructions should not. The assistant generation will be terminated by the end-of-sentence token id.

E.g.
```
text = ("<s>[INST] What is your favourite condiment? [/INST]"
        "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> "
        "[INST] Do you have mayonnaise recipes? [/INST]")
```
This format is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) via the `apply_chat_template()` method:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")

messages = [
    {"role": "user", "content": "What is your favourite condiment?"},
    {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
    {"role": "user", "content": "Do you have mayonnaise recipes?"}
]

# Render the conversation with the model's chat template and tokenize it in one step.
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")

model_inputs = encodeds.to(device)
model.to(device)

# Sample up to 1000 new tokens as the assistant's reply.
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
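Note that `generated_ids` contains the prompt tokens followed by the completion, so the decoded output repeats the whole conversation. If you only want the assistant's reply, a minimal sketch (reusing the variables from the example above) is to slice off the prompt before decoding:

```python
# model_inputs has shape (1, prompt_length); everything past it is newly generated.
prompt_length = model_inputs.shape[1]
reply = tokenizer.decode(generated_ids[0, prompt_length:], skip_special_tokens=True)
print(reply)
```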
## Model Architecture

This instruction model is based on Mistral-7B-v0.1, a transformer model with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention (see the sketch after this list)
- Byte-fallback BPE tokenizer
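
To illustrate the sliding-window idea, here is a minimal, illustrative sketch (not the model's internal implementation; the window size below is arbitrary) of the attention mask it implies: each position attends only to itself and the previous `window - 1` positions.

```python
import torch

def sliding_window_causal_mask(seq_len: int, window: int) -> torch.Tensor:
    # True where query position i may attend to key position j:
    # causal (j <= i) and within the sliding window (i - j < window).
    i = torch.arange(seq_len).unsqueeze(1)  # query positions (column vector)
    j = torch.arange(seq_len).unsqueeze(0)  # key positions (row vector)
    return (j <= i) & (i - j < window)

# With a window of 3, position 4 attends only to positions 2, 3 and 4.
print(sliding_window_causal_mask(seq_len=5, window=3))
```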
## Troubleshooting

- If you see the following error:
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/transformers/models/auto/auto_factory.py", line 482, in from_pretrained
    config, kwargs = AutoConfig.from_pretrained(
  File "/transformers/models/auto/configuration_auto.py", line 1022, in from_pretrained
    config_class = CONFIG_MAPPING[config_dict["model_type"]]
  File "/transformers/models/auto/configuration_auto.py", line 723, in __getitem__
    raise KeyError(key)
KeyError: 'mistral'
```

Installing `transformers` from source should solve the issue: `pip install git+https://github.com/huggingface/transformers`

This should not be required after `transformers` v4.33.4.
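
To check which `transformers` version is installed, you can run:

```python
import transformers

# Mistral support ships in releases after v4.33.4 (see the note above).
print(transformers.__version__)
```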
## Limitations

The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance. It does not have any moderation mechanisms. We look forward to engaging with the community on ways to make the model respect guardrails more reliably, allowing for deployment in environments that require moderated outputs.

## The Mistral AI Team

Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.