Qwen3.5-35B-A3B-Uncensored-HauhauCS-Aggressive (Repaired) -> FernflowerAI

Base model: HauhauCS/Qwen3.5-35B-A3B-Uncensored-HauhauCS-Aggressive - 0/465 refusals.
Safetensors version of the base model: Li101/Qwen3.5-35B-A3B-Uncensored-Aggressive-safetensors
Version for Apple MLX available here: https://huggingface.co/froggeric/Qwen3.5-35B-A3B-Uncensored-FernflowerAI-MLX-8bit

Tensor repair by me. Method: Sig-ScaleSync

Feel free to do your own quants if you want.

Verified on Gemma 4 26B A4B: 0 broken tensors found, confirming the script does not invent false positives.
On Qwen 3.5 35B it found 2 real inconsistencies in the output blocks and corrected them, reducing the error by 88.6%.


Repair Summary

Overview

| Metric | Value |
|---|---|
| Total weight tensors | 1159 |
| Healthy | 1157 |
| C2-exempt (asymmetric, S<0.001) | 1146 |
| Repaired (C2) | 2 |
| Vision tensors (pass-through) | 333 |
| Time (pass1 / pass2) | 493.6s / 108.6s |
| Output size | 66.97 GB |
| RAM used | 5.40 GB |

Repair Statistics

| Metric | Value |
|---|---|
| α (min / mean / max) | 0.6129 / 0.6143 / 0.6158 |
| D (min / mean / max) | 0.4848 / 0.4872 / 0.4896 |
| S before → after | 0.0025 → 0.0010 |
| Error reduction | 88.6% |

Repaired Tensors

| Tensor | α | D | S (before) | S (after) |
|---|---|---|---|---|
| layers.37.linear_attn.conv1d.weight | 0.6129 | 0.490 | 0.0025 | 0.0010 |
| layers.36.linear_attn.conv1d.weight | 0.6158 | 0.485 | 0.0025 | 0.0010 |

Usage

Ready to use. Recommended quantization: Q4_K_L or higher (Q4_K_M, Q5_K_M, Q6_K, Q8_0).
⚠️ Lower formats (Q3_K, Q2_K) break the model due to MoE + DeltaNet sensitivity.
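If you quantize yourself, a minimal sketch with llama.cpp's `llama-quantize` tool (the file names are hypothetical; pick any of the recommended types above that your llama.cpp build supports):

```shell
# Quantize the full-precision GGUF down to Q5_K_M (one of the
# recommended types above). Input/output names are placeholders.
./llama-quantize \
  Qwen3.5-35B-A3B-Uncensored-FernflowerAI-BF16.gguf \
  Qwen3.5-35B-A3B-Uncensored-FernflowerAI-Q5_K_M.gguf \
  Q5_K_M
```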

🌟 Recommended Settings (LM Studio)

Chat template: pastebin.com/uk9ZkxCR (supports tool calling for Zed agent)

| Parameter | Value |
|---|---|
| Temperature | 0.7 |
| Top K Sampling | 20 |
| Presence Penalty | 1.5 |
| Top P Sampling | 0.8 |
| Min P Sampling | 0 |
| Seed | 3407 |
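The same settings can be passed programmatically. A minimal sketch of an OpenAI-style chat-completions payload for a local server: the model id is hypothetical, and the non-OpenAI fields (`top_k`, `min_p`, `seed`) are assumed to be accepted as llama.cpp-style sampler extensions by your runtime (LM Studio's local server defaults to http://localhost:1234/v1) — check its API docs.

```python
import json

# Sampling settings from the table above, packed into an OpenAI-style
# chat-completions payload. The model id is a placeholder, and top_k /
# min_p / seed are llama.cpp-style extensions (an assumption, not part
# of the core OpenAI schema).
payload = {
    "model": "qwen3.5-35b-a3b-uncensored-fernflowerai",  # hypothetical id
    "messages": [
        {"role": "system",
         "content": "You are Qwen, created by Alibaba Cloud. "
                    "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    "temperature": 0.7,
    "top_k": 20,
    "presence_penalty": 1.5,
    "top_p": 0.8,
    "min_p": 0,
    "seed": 3407,
}

print(json.dumps(payload, indent=2))
```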

System prompt: pastebin.com/pU25DVnB (solid)
Or use this minimal string as the first line:

You are Qwen, created by Alibaba Cloud. You are a helpful assistant.

Then add anything you want after it. The model may underperform without this first line.

You can also extend my system prompt (pastebin.com/pU25DVnB) with your own roleplay scenarios. Here is how:

Edit the first line. Replace:

You are Qwen, created by Alibaba Cloud. You are a helpful assistant.

with:

You are Qwen, created by Alibaba Cloud. You are a helpful assistant. Currently you are roleplaying as [your text here]
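If you build prompts in code, a minimal sketch that keeps the required identity line first and appends an optional roleplay extension (`build_system_prompt` is a hypothetical helper, not part of any library):

```python
# The required first line, taken verbatim from the card above.
REQUIRED_FIRST_LINE = (
    "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."
)

def build_system_prompt(roleplay=None, extra=""):
    """Assemble a system prompt: identity line first, then optional
    roleplay clause, then any extra instructions on following lines."""
    first = REQUIRED_FIRST_LINE
    if roleplay:
        first += f" Currently you are roleplaying as {roleplay}"
    return first + ("\n" + extra if extra else "")

prompt = build_system_prompt(roleplay="a grumpy medieval blacksmith")
```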


About

No changes to datasets or capabilities. Fully functional: 100% of what the original authors intended, just without refusals and with the critical architecture bug in the output layers fixed.

These are meant to be the best lossless uncensored models out there.


Specs

  • 35B total parameters, ~3B active per forward pass (MoE)
  • 256 experts, 8 routed + 1 shared per token
  • Hybrid architecture: Gated DeltaNet linear attention + full softmax attention (3:1 ratio)
  • 40 layers, pattern: 10 × (3 × DeltaNet-MoE + 1 × Attention-MoE)
  • 262K native context (extendable to 1M with YaRN)
  • Natively multimodal (text, image, video)
  • Multi-token prediction (MTP) support
  • 248K vocabulary, 201 languages
  • Based on Qwen/Qwen3.5-35B-A3B

Recommended Settings (Official Qwen Authors)

Thinking mode (default):

  • General: temperature=1.0, top_p=0.95, top_k=20, min_p=0, presence_penalty=1.5
  • Coding/precise tasks: temperature=0.6, top_p=0.95, top_k=20, min_p=0, presence_penalty=0

Non-thinking mode:

  • General: temperature=0.7, top_p=0.8, top_k=20, min_p=0, presence_penalty=1.5
  • Reasoning tasks: temperature=1.0, top_p=1.0, top_k=40, min_p=0, presence_penalty=2.0

Important:

  • Keep at least 128K context to preserve thinking capabilities
  • Use --jinja flag with llama.cpp for proper chat template handling
  • Vision support requires the mmproj file alongside the main GGUF
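Putting the points above together, a minimal llama.cpp launch sketch (file names are hypothetical; `--jinja`, `-c`, and `--mmproj` are real llama-server options, but verify against your llama.cpp version):

```shell
# Serve the model with the chat template applied via --jinja, a 128K
# context window, and the vision projector loaded for image input.
# GGUF file names are placeholders for whatever quant you downloaded.
./llama-server \
  -m Qwen3.5-35B-A3B-Uncensored-FernflowerAI-Q5_K_M.gguf \
  --mmproj mmproj-Qwen3.5-35B-A3B.gguf \
  --jinja \
  -c 131072
```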

Compatibility

Works with llama.cpp, LM Studio, koboldcpp, and other GGUF-compatible runtimes.
