nightmedia committed (verified)
Commit 4623e99 · Parent(s): dc5aa91

Update README.md

Files changed (1): README.md (+23 −9)
@@ -1,25 +1,39 @@
 ---
 base_model:
-- DavidAU/gemma-3-12b-it-heretic-R8-it-vl-polaris-alpha-1000x
+- DreamFast/gemma-3-12b-it-heretic
+- DavidAU/gemma-3-12b-it-vl-GLM-4.7-Flash-Heretic-Uncensored-Thinking
 tags:
 - text-generation-inference
 - transformers
+- unsloth
+- heretic
+- abliterated
+- uncensored
+- mergekit
+- merge
 - gemma
 - mlx
 license: apache-2.0
 language:
 - en
 datasets:
-- TeichAI/polaris-alpha-1000x
-pipeline_tag: text-generation
+- TeichAI/glm-4.7-2000x
+pipeline_tag: image-text-to-text
 library_name: mlx
 ---
 
-# gemma-3-12b-it-heretic-R8-it-vl-glm-4.7-2000x-qx86-hi-mlx
-
-This model [gemma-3-12b-it-heretic-R8-it-vl-glm-4.7-2000x-qx86-hi-mlx](https://huggingface.co/gemma-3-12b-it-heretic-R8-it-vl-glm-4.7-2000x-qx86-hi-mlx) was
-converted to MLX format from [DavidAU/gemma-3-12b-it-heretic-R8-it-vl-polaris-alpha-1000x](https://huggingface.co/DavidAU/gemma-3-12b-it-heretic-R8-it-vl-polaris-alpha-1000x)
-using mlx-lm version **0.30.6**.
+# gemma-3-12b-it-vl-GLM-4.7-Flash-Heretic-Uncensored-Thinking-qx86-hi-mlx
+
+Brainwaves
+```brainwave
+         arc   arc/e boolq hswag obkqa piqa  wino
+qx86-hi  0.585,0.756,0.874,0.724,0.462,0.798,0.717
+
+gemma-3-27b-it-heretic
+q8       0.557,0.711,0.868,0.533,0.452,0.706,0.695
+```
+
+-G
 
 ## Use with mlx
 
@@ -30,7 +44,7 @@ pip install mlx-lm
 ```python
 from mlx_lm import load, generate
 
-model, tokenizer = load("gemma-3-12b-it-heretic-R8-it-vl-glm-4.7-2000x-qx86-hi-mlx")
+model, tokenizer = load("gemma-3-12b-it-vl-GLM-4.7-Flash-Heretic-Uncensored-Thinking-qx86-hi-mlx")
 
 prompt = "hello"
 
@@ -41,4 +55,4 @@ if tokenizer.chat_template is not None:
 )
 
 response = generate(model, tokenizer, prompt=prompt, verbose=True)
-```
+```
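The README's usage snippet is split across the diff hunks, so the chat-template branch (`if tokenizer.chat_template is not None: …`) is only partially visible. A minimal sketch of that branch follows; `StubTokenizer` is a hypothetical stand-in for the real mlx-lm tokenizer (which requires the `mlx_lm` package and the model weights), and the Gemma-style turn markers are illustrative, not the model's exact template.

```python
# Sketch of the chat-template step from the README's usage snippet.
# StubTokenizer is hypothetical: it mimics only the two attributes the
# snippet relies on (chat_template and apply_chat_template). Real use
# would be: model, tokenizer = mlx_lm.load("<model-path>").

class StubTokenizer:
    """Minimal stand-in for an mlx-lm tokenizer."""
    chat_template = "<jinja template>"  # non-None, so the branch is taken

    def apply_chat_template(self, messages, add_generation_prompt=True,
                            tokenize=False):
        # Render a single-turn conversation; real tokenizers render a
        # Jinja template. Turn markers here are illustrative only.
        text = "".join(
            f"<start_of_turn>{m['role']}\n{m['content']}<end_of_turn>\n"
            for m in messages
        )
        if add_generation_prompt:
            # Open the model's turn so generation continues from here.
            text += "<start_of_turn>model\n"
        return text

tokenizer = StubTokenizer()
prompt = "hello"

# Same branch as the README: wrap the raw prompt only if the tokenizer
# ships a chat template; otherwise the plain string is passed through.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, tokenize=False
    )

print(prompt)
```

With a template present, the raw `"hello"` is rewritten into a formatted conversation ending in an open model turn, which is what `generate(model, tokenizer, prompt=prompt, ...)` then completes.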