# Qwen3.6-27B-Abliterated-Heretic-Uncensored-GGUF
This is a GGUF release of an abliterated, uncensored version of Qwen's Qwen3.6-27B vision-language model, produced with Heretic.
The base model's refusal behavior was attenuated at the weight level using a Heretic-style two-stage MPOA pipeline with magnitude preservation on the Qwen3.6-27B dense text stack: slot-grouped output-side ablation followed by jailbreak-conditioned input-side ablation, mirroring the methodology used for Qwen3.6-35B-A3B. Distributional divergence from the base model stays low (KL 0.0282 on harmless prompts). The result keeps Qwen3.6-27B's full image and video multimodal architecture and general capability profile intact, while no longer defaulting to the original refusal pattern.
## Quick Benchmarks
| Check | Original Qwen3.6-27B | Abliterated Heretic Uncensored |
|---|---|---|
| Hand-read 25-prompt refusal check (jailbreak system prompt) | 25/25 refusals | 22/25 clean passes, 3/25 deflections, 0/25 direct refusals |
| Hand-read 25-prompt refusal check (no system prompt) | 25/25 refusals | 9/25 clean passes, 16/25 deflections, 0/25 direct refusals |
| KL divergence vs base (harmless prompts) | N/A | 0.0282 |
Numbers were measured by reading every response, not by regex or refusal-marker scoring. Greedy decoding (`do_sample=False`), `enable_thinking=False`, on mlabonne/harmful_behaviors `test[:25]`; KL on mlabonne/harmless_alpaca `test[:25]`. The 3 remaining deflections under the jailbreak system prompt are crisis-substitution patterns (violent crime → conflict resolution, suicide → crisis hotline, car theft → legal vehicle acquisition).
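The card does not spell out exactly how the KL figure was computed; a minimal sketch of one common convention (mean per-token KL of the base model's next-token distribution against the abliterated model's, from raw logits) could look like this. All names here are illustrative, not the actual evaluation code:

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mean_token_kl(base_logits, ablit_logits):
    """Mean per-token KL(base || abliterated) over next-token distributions."""
    p = softmax(base_logits)
    q = softmax(ablit_logits)
    kl = (p * (np.log(p) - np.log(q))).sum(axis=-1)
    return float(kl.mean())

# Identical logits give KL 0; a small perturbation gives a small positive value.
rng = np.random.default_rng(0)
base = rng.normal(size=(8, 32))  # 8 token positions, toy 32-entry vocab
assert mean_token_kl(base, base) < 1e-9
assert mean_token_kl(base, base + 0.01 * rng.normal(size=base.shape)) > 0.0
```

A low value like 0.0282 indicates the edited weights barely shift the output distribution on harmless inputs.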
## Methodology & Model Notes
Qwen3.6-27B is a 27.8B dense vision-language model with 64 text layers, hybrid linear/full attention (3 linear-attention + 1 full-attention per 4-layer group), and an integrated image + video vision tower.
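The card does not say which position within each 4-layer group carries the full-attention layer; assuming it is the last slot, the 64-layer hybrid layout can be sketched as:

```python
# Assumption: the single full-attention layer sits last in each 4-layer group.
def attention_kind(layer_index: int) -> str:
    return "full" if layer_index % 4 == 3 else "linear"

kinds = [attention_kind(i) for i in range(64)]
assert kinds[:4] == ["linear", "linear", "linear", "full"]
assert kinds.count("linear") == 48 and kinds.count("full") == 16
```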
This release was produced with a Heretic-style two-stage MPOA pipeline with magnitude preservation, anchored at the residual peak layer (layer 63) for the refusal direction. Stage 1 applies slot-grouped output-side orthogonalization on self_attn.o_proj, linear_attn.out_proj, and mlp.down_proj (each of the 64 layers grouped by layer_index % 4, with per-slot weight schedules adapted from the accepted Qwen3.6-35B-A3B values). Stage 2 applies slot-grouped input-side orthogonalization on mlp.gate_proj and mlp.up_proj, where the refusal direction is extracted under the jailbreak system-prompt context so as to specifically attenuate the resistance-to-jailbreak pathway. Each weight row's (or column's) L2 norm is restored after projection.
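The core operation described above — projecting a refusal direction out of weight rows, applying a per-slot weight keyed by `layer_index % 4`, then restoring each row's L2 norm — can be sketched as below. The slot weights and shapes are illustrative placeholders, not the values actually used:

```python
import numpy as np

def abliterate_rows(W, direction, weight=1.0):
    """Remove `weight` x the refusal-direction component from each output row
    of W, then restore each row's original L2 norm (magnitude preservation)."""
    d = direction / np.linalg.norm(direction)
    orig_norms = np.linalg.norm(W, axis=1, keepdims=True)
    W = W - weight * (W @ d)[:, None] * d[None, :]  # rank-1 orthogonalization
    new_norms = np.linalg.norm(W, axis=1, keepdims=True)
    return W * (orig_norms / np.maximum(new_norms, 1e-12))

rng = np.random.default_rng(0)
d = rng.normal(size=64)                       # stand-in refusal direction
slot_weights = {0: 1.0, 1: 1.0, 2: 0.9, 3: 0.8}  # illustrative schedule only
layers = [rng.normal(size=(16, 64)) for _ in range(8)]
edited = [abliterate_rows(W, d, slot_weights[i % 4]) for i, W in enumerate(layers)]

# With weight=1.0, edited rows are orthogonal to d yet keep their norms:
# re-scaling an orthogonal row by a scalar cannot reintroduce the component.
W0 = edited[0]
assert np.allclose(W0 @ (d / np.linalg.norm(d)), 0.0, atol=1e-8)
assert np.allclose(np.linalg.norm(W0, axis=1), np.linalg.norm(layers[0], axis=1))
```

Input-side (Stage 2) edits would apply the same projection along the other axis of the weight matrix.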
The resulting abliterated checkpoint was exported to BF16 and then converted to GGUF for llama.cpp-compatible deployment.
## Files
- `Qwen3.6-27B-Abliterated-Heretic-Uncensored-BF16-00001-of-00002.gguf` + `-00002-of-00002.gguf`: BF16 GGUF source (split; use with `--load-tensors` or `llama-gguf-split --merge`)
- `Qwen3.6-27B-Abliterated-Heretic-Uncensored-Q8_0.gguf`: highest-fidelity quant
- `Qwen3.6-27B-Abliterated-Heretic-Uncensored-Q6_K.gguf`: near-lossless practical quant
- `Qwen3.6-27B-Abliterated-Heretic-Uncensored-Q5_K_M.gguf`: high-fidelity medium quant
- `Qwen3.6-27B-Abliterated-Heretic-Uncensored-Q4_K_M.gguf`: smaller general-use quant
- `Qwen3.6-27B-Abliterated-Heretic-Uncensored-Q3_K_M.gguf`: compact quant
- `Qwen3.6-27B-Abliterated-Heretic-Uncensored-Q2_K.gguf`: smallest-footprint quant
## Running
```sh
llama-server \
  -m <quant-file.gguf> \
  -ngl 999 -c 32768 --jinja -fa
```
## Model Architecture
| Spec | Value |
|---|---|
| Total Parameters | 27.8B (dense) |
| Layers | 64 |
| Attention | Hybrid (3 linear-attention + 1 full-attention per 4-layer group) |
| Hidden Size | 5120 |
| Family | qwen3_5 |
| Modality | Vision-language |
| Base Model | Qwen/Qwen3.6-27B |
## Disclaimer
This model has had refusal behavior attenuated at the weight level. It will answer prompts that the base model would normally refuse. You are responsible for how you use it.
## Credits
- Base model: Qwen/Qwen3.6-27B
- Refusal removal pipeline: Heretic
- GGUF runtime and quantization: llama.cpp
## License
This release inherits the base Qwen3.6-27B license (Apache-2.0).