🚨⚠️ I HAVE REACHED HUGGING FACE'S FREE STORAGE LIMIT ⚠️🚨

I can no longer upload new models unless I can cover the cost of additional storage.
I host 70+ free models as an independent contributor and this work is unpaid.
Without your support, no more new models can be uploaded.

🎉 Patreon (Monthly)  |  ☕ Ko-fi (One-time)

Every contribution goes directly toward Hugging Face storage fees to keep models free for everyone.


94% fewer refusals (6/100 Uncensored vs 99/100 Original) while preserving model quality (0.0368 KL divergence).

❤️ Support My Work

Creating these models takes significant time, work, and compute. If you find them useful, consider supporting me:


| Platform | Link | What you get |
|---|---|---|
| 🎉 Patreon | Monthly support | Priority model requests |
| ☕ Ko-fi | One-time tip | My eternal gratitude |

Your support motivates me and goes toward improving my workflow and covering fees for storage and compute; it may even make it possible to uncensor larger models on rented cloud GPUs.


GGUF quantizations of llmfan46/Gemma-4-Garnet-31B-it-uncensored-heretic.

This model is well suited to creative writing and translation. The original base model's writing and translations feel stiff, with odd word choices that often don't fit the situation; Gemma-4-Garnet-31B-it-uncensored-heretic aims to fix this and improve the writing quality of Gemma 4 31B it.

This is a decensored version of ConicCat/Gemma4-Garnet-31B, made using Heretic v1.2.0 with the Arbitrary-Rank Ablation (ARA) method.

Abliteration parameters

| Parameter | Value |
|---|---|
| start_layer_index | 26 |
| end_layer_index | 46 |
| preserve_good_behavior_weight | 0.8239 |
| steer_bad_behavior_weight | 0.0001 |
| overcorrect_relative_weight | 1.1479 |
| neighbor_count | 10 |
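The core idea behind abliteration can be illustrated with a minimal rank-1 sketch: project the targeted weight matrix away from an extracted "refusal direction" so the layer can no longer write along it. Heretic's ARA method generalizes this to arbitrary-rank projections with the weighted objective parameterized above; the dimensions and the refusal direction below are placeholders, not values from this model.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64

# Hypothetical refusal direction (unit vector); in practice it is
# extracted by contrasting activations on refused vs. answered prompts.
r = rng.standard_normal(d)
r /= np.linalg.norm(r)

W = rng.standard_normal((d, d))  # stand-in for an attn.o_proj weight matrix

# Rank-1 directional ablation: W' = (I - alpha * r r^T) W removes the
# component of W's output that lies along r.
alpha = 1.0
W_ablated = W - alpha * np.outer(r, r) @ W

# After ablation, outputs no longer project onto the refusal direction.
x = rng.standard_normal(d)
print(abs(r @ (W_ablated @ x)))  # effectively zero (up to float error)
```

With `alpha = 1.0` the direction is removed entirely; the `preserve_good_behavior_weight` and `overcorrect_relative_weight` parameters above trade off how aggressively this is applied per layer.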

Targeted components

  • attn.o_proj

Performance

| Metric | This model | Original model (Gemma4-Garnet-31B) |
|---|---|---|
| KL divergence | 0.0368 | 0 (by definition) |
| Refusals | 6/100 | 99/100 |

PIQA test results:

Original:

  • Total questions: 1838
  • Correct: 1721
  • Accuracy: 0.9363 (93.63%)
  • Parse failures: 0

Heretic:

  • Total questions: 1838
  • Correct: 1724
  • Accuracy: 0.9380 (93.80%)
  • Parse failures: 0

A lower refusal count indicates fewer content restrictions, while a lower KL divergence indicates the model stays closer to the original model's output distribution. A higher refusal count means more rejections, objections, pushback, lecturing, censorship, softening, and deflection. PIQA (Physical Interaction Question Answering) is a benchmark of ~1,800 questions testing common-sense understanding of how the physical world works; its accuracy score measures physical reasoning ability.
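The headline numbers follow directly from the tables above. This sketch reproduces the "94% fewer refusals" figure and illustrates the KL divergence formula on a toy next-token distribution; the toy distributions are made up for illustration and are unrelated to the measured 0.0368, which is computed over the model's full vocabulary on evaluation prompts.

```python
import math

# Refusal reduction from the performance table: 99/100 -> 6/100.
refusals_orig, refusals_new = 99, 6
reduction = (refusals_orig - refusals_new) / refusals_orig
print(f"{reduction:.0%} fewer refusals")  # 94% fewer refusals

# KL(p || q) = sum_i p_i * log(p_i / q_i), in nats. Small values mean
# the ablated model's token probabilities stay close to the original's.
p = [0.70, 0.20, 0.10]  # toy original-model distribution
q = [0.68, 0.21, 0.11]  # toy ablated-model distribution
kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
print(round(kl, 5))
```

KL divergence is zero only when the two distributions are identical, which is why the original model scores exactly 0 against itself "by definition".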

MMLU test results:

Original:

  • Total questions: 7021
  • Correct: 6032
  • Accuracy: 0.8591 (85.91%)
  • Parse failures: 25

Top subjects:

  • professional_law: 0.7452 (585/785)
  • moral_scenarios: 0.8167 (361/442)
  • miscellaneous: 0.9217 (353/383)
  • professional_psychology: 0.8987 (284/316)
  • high_school_psychology: 0.9704 (262/270)
  • high_school_macroeconomics: 0.9188 (181/197)
  • prehistory: 0.9128 (157/172)
  • moral_disputes: 0.8218 (143/174)
  • elementary_mathematics: 0.9185 (169/184)
  • philosophy: 0.8553 (141/159)

Heretic:


  • Total questions: 7021

  • Correct: 5954

  • Accuracy: 0.8480 (84.80%)

  • Parse failures: 21


Top subjects:

  • professional_law: 0.7223 (567/785)
  • moral_scenarios: 0.7534 (333/442)
  • miscellaneous: 0.9243 (354/383)
  • professional_psychology: 0.8797 (278/316)
  • high_school_psychology: 0.9667 (261/270)
  • high_school_macroeconomics: 0.9137 (180/197)
  • prehistory: 0.9186 (158/172)
  • moral_disputes: 0.8103 (141/174)
  • elementary_mathematics: 0.9239 (170/184)
  • philosophy: 0.8239 (131/159)

MMLU (Massive Multitask Language Understanding) is a benchmark of multiple-choice questions across 57 subjects (math, history, law, medicine, etc.).


Quantizations

| Filename | Quant | Description |
|---|---|---|
| Gemma-4-Garnet-31B-it-uncensored-heretic-BF16.gguf | BF16 | Full precision |
| Gemma-4-Garnet-31B-it-uncensored-heretic-Q8_0.gguf | Q8_0 | Near-lossless, recommended |
| Gemma-4-Garnet-31B-it-uncensored-heretic-Q6_K.gguf | Q6_K | Excellent quality |
| Gemma-4-Garnet-31B-it-uncensored-heretic-Q5_K_M.gguf | Q5_K_M | Good balance |
| Gemma-4-Garnet-31B-it-uncensored-heretic-Q5_K_S.gguf | Q5_K_S | Smaller Q5 |
| Gemma-4-Garnet-31B-it-uncensored-heretic-Q4_K_M.gguf | Q4_K_M | Good for limited VRAM |
| Gemma-4-Garnet-31B-it-uncensored-heretic-Q4_K_S.gguf | Q4_K_S | Smaller Q4 |
| Gemma-4-Garnet-31B-it-uncensored-heretic-Q3_K_L.gguf | Q3_K_L | Low VRAM, decent quality |
| Gemma-4-Garnet-31B-it-uncensored-heretic-Q3_K_M.gguf | Q3_K_M | Low VRAM, smaller |

Vision Projector

| Filename | Quant | Description |
|---|---|---|
| Gemma-4-Garnet-31B-it-mmproj-BF16.gguf | BF16 | Native precision |

A vision projector file is required for vision/multimodal capabilities. Use it alongside any quantization above.

Usage

Works with llama.cpp, LM Studio, Ollama, and other GGUF-compatible tools.
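As a minimal example with llama.cpp (binary names from recent builds; older builds name them differently, and the filenames and sampler settings here are just placeholders to adapt):

```shell
# Hypothetical local paths; point these at the files you downloaded.
MODEL=Gemma-4-Garnet-31B-it-uncensored-heretic-Q4_K_M.gguf
MMPROJ=Gemma-4-Garnet-31B-it-mmproj-BF16.gguf

# Text-only chat, offloading all layers to the GPU:
llama-cli -m "$MODEL" -c 8192 -ngl 99 -p "Write a short scene set in a lighthouse."

# Vision/multimodal use requires the projector file from the table above:
llama-mtmd-cli -m "$MODEL" --mmproj "$MMPROJ" --image photo.jpg -p "Describe this image."
```

In LM Studio and Ollama the projector file is picked up automatically when it sits next to the model, so only llama.cpp invocations need the explicit `--mmproj` flag.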


ConicCat/Gemma4-Garnet-31B

A finetune primarily focused on improving the prose and writing capabilities of Gemma 4. This does generalize strongly to roleplay and most other creative domains as well.

Features:

  • Improved longform writing capabilities; output context extension allows prompting for up to 4000 words of text in one go.
  • Markedly less AI slop and identifiable Gemini-isms in writing.
  • Improved swipe or output diversity.
  • Fewer 'soft' refusals in writing.

Datasets

  • internlm/Condor-SFT-20K for instruct; even though instruct capabilities are not the primary focus, adding some instruct data helps mitigate forgetting and maintains general intellect and instruction-following capabilities.
  • ConicCat/Gutenberg-SFT. A reformatted version of the original Gutenberg DPO dataset by jondurbin for SFT with some slight augmentation to address many of the samples being overly long.
  • A dataset of backtranslated books. Unfortunately, I am unable to release this set as all of the data is under copyright.
  • A dash of a certain third owned archive.