---
language:
- en
pipeline_tag: text-generation
tags:
- text-generation-inference
- instruct
- conversational
- roleplay
license: cc-by-4.0
---

<h1 style="text-align: center">Erosumika-7B-v3-0.2</h1>
<h2 style="text-align: center">~Mistral 0.2 Edition~</h2>
|
|
<div style="display: flex; justify-content: center;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/6512681f4151fb1fa719e033/8YBKfcegQZliRlQNm0oir.gif" alt="Header GIF">
</div>
| |
|
|
6.0bpw quant of Erosumika-7B-v3-0.2. Original model: [localfultonextractor/Erosumika-7B-v3-0.2](https://huggingface.co/localfultonextractor/Erosumika-7B-v3-0.2)
|
|
Recommended for most users: a good balance of performance and quality.
|
|
## Model Details
The Mistral 0.2 version of Erosumika-7B-v3: a DARE TIES merge between Nitral's [Kunocchini-7b](https://huggingface.co/Nitral-AI/Kunocchini-7b), Endevor's [InfinityRP-v1-7B](https://huggingface.co/Endevor/InfinityRP-v1-7B) and my [FlatErosAlpha](https://huggingface.co/localfultonextractor/FlatErosAlpha), a flattened version of tavtav's [eros-7B-ALPHA](https://huggingface.co/tavtav/eros-7B-ALPHA) (flattened in order to keep the vocab size at 32000). The Alpaca and ChatML prompt formats work best. It is slightly smarter and has better prompt comprehension than the Mistral 0.1 Erosumika-7B-v3. 32k context should work.
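Since the card recommends Alpaca or ChatML formatting, here is a minimal sketch of assembling a ChatML prompt by hand (the helper function and the example messages are illustrative, not part of this repo; frontends like SillyTavern handle this for you):

```python
def build_chatml_prompt(system, turns):
    """Assemble a ChatML-formatted prompt string from a system message
    and a list of (role, message) turns."""
    parts = [f"<|im_start|>system\n{system}<|im_end|>"]
    for role, message in turns:
        parts.append(f"<|im_start|>{role}\n{message}<|im_end|>")
    # Leave the assistant turn open so the model generates the reply.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = build_chatml_prompt(
    "You are a creative roleplay partner.",
    [("user", "Describe the tavern we just entered.")],
)
print(prompt)
```

The resulting string ends with an open `<|im_start|>assistant` header, so generation continues as the assistant's reply; stop on `<|im_end|>`.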
|
|
[GGUF quants](https://huggingface.co/localfultonextractor/Erosumika-7B-v3-0.2-GGUF)
|
|
|
|
## Limitations and biases
The intended use case for this model is fictional writing for entertainment purposes; any other use is out of scope.
The model may produce socially unacceptable or undesirable text, even if the prompt itself contains nothing explicitly offensive. Outputs may often be factually wrong or misleading.
|
|
|
|
## Merge config
```yaml
merge_method: task_arithmetic
base_model: alpindale/Mistral-7B-v0.2-hf
models:
  - model: localfultonextractor/Erosumika-7B-v3
    parameters:
      weight: 1.0
dtype: float16
```