Commit 80e5f8f (verified) · Parent(s): b5a7d00 · committed by mradermacher

auto-patch README.md

Files changed (1): README.md (+5, -2)
README.md CHANGED

@@ -1,6 +1,10 @@
 ---
 base_model: Aratako/Mixtral-8x7B-Instruct-v0.1-upscaled
 language:
+- fr
+- it
+- de
+- es
 - en
 library_name: transformers
 license: apache-2.0
@@ -32,7 +36,7 @@ more details, including on how to concatenate multi-part files.
 | Link | Type | Size/GB | Notes |
 |:-----|:-----|--------:|:------|
 | [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-upscaled-i1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-upscaled.i1-IQ1_S.gguf) | i1-IQ1_S | 16.8 | for the desperate |
-| [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-upscaled-i1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-upscaled.i1-IQ1_M.gguf) | i1-IQ1_M | 18.7 | for the desperate |
+| [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-upscaled-i1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-upscaled.i1-IQ1_M.gguf) | i1-IQ1_M | 18.7 | mostly desperate |
 | [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-upscaled-i1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-upscaled.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 22.0 | |
 | [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-upscaled-i1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-upscaled.i1-IQ2_S.gguf) | i1-IQ2_S | 24.3 | |
 | [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-upscaled-i1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-upscaled.i1-IQ2_XS.gguf) | i1-IQ2_XS | 24.4 | |
@@ -53,7 +57,6 @@ more details, including on how to concatenate multi-part files.
 | [PART 1](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-upscaled-i1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-upscaled.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-upscaled-i1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-upscaled.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 58.1 | |
 | [PART 1](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-upscaled-i1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-upscaled.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-upscaled-i1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-upscaled.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 67.1 | practically like static Q6_K |
 
-
 Here is a handy graph by ikawrakow comparing some lower-quality quant
 types (lower is better):
 
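The PART 1/PART 2 entries in the diffed table are byte-level splits of one GGUF file, and the hunk context refers to notes on how to concatenate multi-part files. A minimal sketch of that reassembly, using small dummy part files in place of the real multi-GB downloads (file names here are stand-ins, not the actual quants):

```shell
# Create dummy stand-ins for downloaded parts such as
# Mixtral-8x7B-Instruct-v0.1-upscaled.i1-Q6_K.gguf.part1of2 / .part2of2
printf 'first-half-' > model.gguf.part1of2
printf 'second-half' > model.gguf.part2of2

# Reassembly: concatenate the parts in order, part1of2 first.
# The result is a single model.gguf; the part files can be deleted afterwards.
cat model.gguf.part1of2 model.gguf.part2of2 > model.gguf
```

For the real files the same single `cat` line applies, with the downloaded part names; comparing the resulting file size against the table's Size/GB column is a quick sanity check.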