---
base_model:
- DreamFast/gemma-3-12b-it-heretic
- DavidAU/gemma-3-12b-it-vl-GLM-4.7-Flash-Heretic-Uncensored-Thinking
tags:
- text-generation-inference
- transformers
- unsloth
- heretic
- abliterated
- uncensored
- mergekit
- merge
- gemma
- mlx
license: apache-2.0
language:
- en
datasets:
- TeichAI/glm-4.7-2000x
pipeline_tag: image-text-to-text
library_name: mlx
---

# gemma-3-12b-it-vl-GLM-4.7-Flash-Heretic-Uncensored-Thinking-qx86-hi-mlx

## Brainwaves

Accuracy on standard benchmark tasks, with a q8 quant of gemma-3-27b-it-heretic shown for comparison:

| Model (quant) | arc_challenge | arc_easy | boolq | hellaswag | openbookqa | piqa | winogrande |
|---|---|---|---|---|---|---|---|
| gemma-3-12b-it-vl-GLM-4.7-Flash-Heretic-Uncensored-Thinking (qx86-hi) | 0.585 | 0.756 | 0.874 | 0.724 | 0.462 | 0.798 | 0.717 |
| gemma-3-27b-it-heretic (q8) | 0.557 | 0.711 | 0.868 | 0.533 | 0.452 | 0.706 | 0.695 |

-G

## Use with mlx

```bash
pip install mlx-lm
```
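
For a quick smoke test without writing any Python, mlx-lm also installs a command-line generator. A minimal sketch, assuming the model path below resolves to the converted weights (a local directory or a Hub repo id):

```bash
mlx_lm.generate \
  --model gemma-3-12b-it-vl-GLM-4.7-Flash-Heretic-Uncensored-Thinking-qx86-hi-mlx \
  --prompt "hello" \
  --max-tokens 256
```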

```python
from mlx_lm import load, generate

# Load the quantized weights and tokenizer (local directory or Hub repo id)
model, tokenizer = load("gemma-3-12b-it-vl-GLM-4.7-Flash-Heretic-Uncensored-Thinking-qx86-hi-mlx")

prompt = "hello"

# Wrap the raw prompt in the model's chat template when one is defined
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_dict=False,
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
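
The snippet above uses the library's default decoding. As a sketch, assuming a recent mlx-lm release that exposes `mlx_lm.sample_utils.make_sampler` and the `sampler` keyword on `generate`, temperature/top-p sampling looks like this (the sampling values are illustrative, not tuned for this model):

```python
from mlx_lm import load, generate
from mlx_lm.sample_utils import make_sampler

model, tokenizer = load("gemma-3-12b-it-vl-GLM-4.7-Flash-Heretic-Uncensored-Thinking-qx86-hi-mlx")

# Illustrative sampling settings, not tuned for this model
sampler = make_sampler(temp=0.7, top_p=0.9)

messages = [{"role": "user", "content": "hello"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

response = generate(
    model, tokenizer, prompt=prompt, sampler=sampler, max_tokens=512, verbose=True
)
```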