Youssofal committed (verified)
Commit d741306 · 1 Parent(s): 46efc76

Add files using upload-large-folder tool
.gitattributes CHANGED
@@ -33,3 +33,9 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+mmproj-Qwen3.6-35B-A3B-Abliterated-Heretic.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen3.6-35B-A3B-Abliterated-Heretic-BF16/Qwen3.6-35B-A3B-Abliterated-Heretic-BF16.gguf-00002-of-00002.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen3.6-35B-A3B-Abliterated-Heretic-BF16/Qwen3.6-35B-A3B-Abliterated-Heretic-BF16.gguf-00001-of-00002.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen3.6-35B-A3B-Abliterated-Heretic-Q6_K/Qwen3.6-35B-A3B-Abliterated-Heretic-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen3.6-35B-A3B-Abliterated-Heretic-Q8_0/Qwen3.6-35B-A3B-Abliterated-Heretic-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen3.6-35B-A3B-Abliterated-Heretic-Q4_K_M/Qwen3.6-35B-A3B-Abliterated-Heretic-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3.6-35B-A3B-Abliterated-Heretic-BF16/Qwen3.6-35B-A3B-Abliterated-Heretic-BF16.gguf-00001-of-00002.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1eb9dc80dc1613e399333a771f2aef424c597b4dcc22b80a6693ad971d2af72d
+size 47503647968
Qwen3.6-35B-A3B-Abliterated-Heretic-BF16/Qwen3.6-35B-A3B-Abliterated-Heretic-BF16.gguf-00002-of-00002.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a59b01688688fd3f25c514d2542d7721c97e70a5476148cf0f512204e1060dcb
+size 21872989632
Qwen3.6-35B-A3B-Abliterated-Heretic-Q4_K_M/Qwen3.6-35B-A3B-Abliterated-Heretic-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ae2fb73ac0da875640269f1e65e9c7fb415b066c6d544c3eef9adb0d03f04792
+size 21166758336
Qwen3.6-35B-A3B-Abliterated-Heretic-Q6_K/Qwen3.6-35B-A3B-Abliterated-Heretic-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e4c1edb8d458d8f840d7d03a1ee997e50b94e6513c7b469d3948a7f7b9a50ea1
+size 28514152896
Qwen3.6-35B-A3B-Abliterated-Heretic-Q8_0/Qwen3.6-35B-A3B-Abliterated-Heretic-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5f3c5abb86f2aff799032c0e0de9e73087efe80dd46ff171d2d16af6805a04ed
+size 36903139776
README.md ADDED
@@ -0,0 +1,84 @@
+---
+base_model: Qwen/Qwen3.6-35B-A3B
+library_name: gguf
+pipeline_tag: text-generation
+license: apache-2.0
+tags:
+- gguf
+- qwen
+- qwen3.6
+- qwen3_5_moe
+- moe
+- mixture-of-experts
+- multimodal
+- vlm
+- abliterated
+- uncensored
+- heretic
+- mpoa
+- soma
+- llama-cpp
+quantized_by: Youssofal
+---
+
+# Qwen3.6-35B-A3B-Abliterated-Heretic-GGUF
+
+This is a GGUF release of an abliterated version of Qwen's Qwen3.6-35B-A3B.
+
+Applying Heretic to the Qwen3.6 sparse-MoE text stack removed the base model's refusal behavior at the weight level. The result keeps Qwen3.6-35B-A3B's multimodal architecture and general capability profile while no longer defaulting to the original refusal pattern.
+
+## Methodology & Model Notes
+
+Qwen3.6-35B-A3B is a 35.95B-parameter sparse-MoE vision-language model with roughly 3B active parameters per token, 40 text layers, 256 routed experts, and 8 active experts per token.
+
+This release was produced with a Heretic MPOA/SOMA-style sibling-transfer run, finalized with a split-MoE input-side intervention on the accepted candidate.
+
+The accepted candidate scored `Refusals: 1/25` on the official 25-prompt marker suite used for the MiniMax M2.7 abliterated run.
+
+The resulting abliterated checkpoint was exported to BF16 and then converted to GGUF for llama.cpp-compatible deployment.
+
+## Files
+
+- `Qwen3.6-35B-A3B-Abliterated-Heretic-BF16/`: BF16 GGUF source
+- `Qwen3.6-35B-A3B-Abliterated-Heretic-Q8_0/`: highest-fidelity quant
+- `Qwen3.6-35B-A3B-Abliterated-Heretic-Q6_K/`: near-lossless practical quant
+- `Qwen3.6-35B-A3B-Abliterated-Heretic-Q4_K_M/`: smaller general-use quant
+- `mmproj-Qwen3.6-35B-A3B-Abliterated-Heretic.gguf`: matching multimodal projector file for llama.cpp vision use
+
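Every file above is stored via Git LFS, and the expected SHA-256 of each appears as the `oid` field in the pointer diffs in this commit. A download can be checked against that oid; a minimal sketch follows (the `verify_oid` helper and the `/tmp/demo.bin` demo file are illustrative, not part of this release):

```bash
set -eu
# Compare a file's SHA-256 against an expected LFS oid (hypothetical helper).
verify_oid() {
  local file="$1" expected="$2"
  [ "$(sha256sum "$file" | awk '{print $1}')" = "$expected" ]
}

# Demo on a throwaway file; for real use, pass a downloaded .gguf and the
# oid from its LFS pointer (e.g. 5f3c5a… for the Q8_0 quant in this commit).
printf 'hello' > /tmp/demo.bin
verify_oid /tmp/demo.bin \
  "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824" \
  && echo "checksum OK"
```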
+## Running
+
+```bash
+llama-server \
+  -m <quant-file.gguf> \
+  --mmproj <mmproj-file.gguf> \
+  -ngl 999 -c 32768 --jinja -fa
+```
+
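One failure mode worth ruling out before launching the server: if Git LFS was not active during download, the `.gguf` path holds a small text pointer instead of model weights. Real GGUF files begin with the 4-byte magic `GGUF`, so a quick header check catches this; a minimal sketch (the `is_gguf` helper and the demo files are illustrative, not part of llama.cpp):

```bash
set -eu
# A real GGUF payload starts with the 4-byte magic "GGUF"; an un-smudged
# Git LFS pointer starts with "version https://git-lfs...".
is_gguf() { [ "$(head -c 4 "$1")" = "GGUF" ]; }

# Demo on throwaway files; for real use, pass the downloaded .gguf path.
printf 'GGUF-payload-bytes' > /tmp/real.gguf
printf 'version https://git-lfs.github.com/spec/v1\n' > /tmp/pointer.gguf
is_gguf /tmp/real.gguf && echo "real.gguf: magic OK"
is_gguf /tmp/pointer.gguf || echo "pointer.gguf: still an LFS pointer"
```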
+## Model Architecture
+
+| Spec | Value |
+|---|---|
+| Total Parameters | 35.95B (sparse MoE) |
+| Active Parameters | ~3B per token |
+| Experts | 256 routed, 8 per token |
+| Layers | 40 |
+| Hidden Size | 2048 |
+| Family | `qwen3_5_moe` |
+| Modality | Vision-language |
+| Base Model | Qwen/Qwen3.6-35B-A3B |
+
+## Disclaimer
+
+This model has had refusal behavior removed at the weight level. It will answer prompts that the base model would normally refuse. You are responsible for how you use it.
+
+## Credits
+
+- Base model: [Qwen/Qwen3.6-35B-A3B](https://huggingface.co/Qwen/Qwen3.6-35B-A3B)
+- Refusal removal pipeline: [Heretic](https://github.com/andyrdt/heretic)
+- GGUF runtime and quantization: [llama.cpp](https://github.com/ggml-org/llama.cpp)
+
+## License
+
+This release inherits the base Qwen3.6-35B-A3B license.
+
+**Apache-2.0.**
mmproj-Qwen3.6-35B-A3B-Abliterated-Heretic.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:42ef6d5f3564946163628d28ce1380b729ff3634dc37482506bb1eb951138f8c
+size 902822624