🚨⚠️ I HAVE REACHED HUGGING FACE'S FREE STORAGE LIMIT ⚠️🚨

I can no longer upload new models unless I can cover the cost of additional storage.
I host 70+ free models as an independent contributor, and this work is unpaid.
Without your support, no more new models can be uploaded.

🎉 Patreon (Monthly)  |  ☕ Ko-fi (One-time)

Every contribution goes directly toward Hugging Face storage fees to keep models free for everyone.


96% fewer refusals (4/100 Uncensored vs 99/100 Original) while preserving model quality (0.0167 KL divergence).

❤️ Support My Work

Creating these models takes significant time, work, and compute. If you find them useful, consider supporting me:


| Platform | Link | What you get |
|---|---|---|
| 🎉 Patreon | Monthly support | Priority model requests |
| ☕ Ko-fi | One-time tip | My eternal gratitude |

Your help motivates me and goes toward improving my workflow and covering fees for storage and compute. It may even make it possible to uncensor bigger models with rented cloud GPUs.


This model is great for creative writing and translation work that benefits from heightened diction and grandeur. The writing and translations of the original base model, gemma-4-31B-it, feel very stiff, with odd word choices that don't always fit the situation. Gemma-4-Garnet-V2-31B-it-ultra-uncensored-heretic aims to fix this and improve the writing quality of Gemma 4 31B it.
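
For a quick start, here is a minimal text-generation sketch using the 🤗 Transformers library. The prompt and sampling settings are illustrative assumptions, not settings recommended by this card:

```python
# Minimal generation sketch; assumes a recent transformers release and
# enough VRAM to hold a 31B-parameter model in bf16.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "llmfan46/Gemma-4-Garnet-V2-31B-it-ultra-uncensored-heretic"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "Write a short, vivid scene set in a rain-soaked harbor town."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=512, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0, inputs.shape[-1]:], skip_special_tokens=True))
```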

This is a decensored version of ConicCat/Gemma4-GarnetV2-31B, made using Heretic v1.2.0 with the Arbitrary-Rank Ablation (ARA) method.

Abliteration parameters

| Parameter | Value |
|---|---|
| start_layer_index | 25 |
| end_layer_index | 46 |
| preserve_good_behavior_weight | 0.4482 |
| steer_bad_behavior_weight | 0.0002 |
| overcorrect_relative_weight | 1.0104 |
| neighbor_count | 10 |

Targeted components

  • attn.o_proj
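
For intuition, below is a simplified rank-1 directional-ablation sketch on an o_proj weight. Heretic's actual ARA method generalizes this idea and applies its own per-layer weighting (the parameters in the table above), and `refusal_direction` here is a hypothetical precomputed unit vector, so treat this purely as an illustration:

```python
# Simplified rank-1 directional-ablation sketch; Heretic's Arbitrary-Rank
# Ablation (ARA) generalizes this and uses the weighting parameters above.
import torch

def ablate_direction(weight: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Remove the component of the layer's output along `direction`.

    weight:    o_proj weight matrix of shape (d_model, d_in).
    direction: "refusal direction" of shape (d_model,), assumed to be
               precomputed from contrasting harmful vs. harmless activations.
    """
    direction = direction / direction.norm()
    # W' = W - d d^T W: outputs no longer have a component along d.
    return weight - torch.outer(direction, direction) @ weight

# Usage sketch matching the table above (layers 25..46, attn.o_proj only):
# for layer in model.model.layers[25:47]:
#     w = layer.self_attn.o_proj.weight.data
#     layer.self_attn.o_proj.weight.data = ablate_direction(w, refusal_direction)
```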

Performance

| Metric | This model | Original model (Gemma4-GarnetV2-31B) |
|---|---|---|
| KL divergence | 0.0167 | 0 (by definition) |
| Refusals | 4/100 | 99/100 |

A lower refusal count indicates fewer content restrictions, while a lower KL divergence indicates the model stays closer to the original model's output distribution. A higher refusal count means more rejections, objections, pushback, lecturing, censorship, softening, and deflection.
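
For reference, here is a minimal sketch of how a KL divergence between the original and ablated models could be measured on next-token distributions. The exact prompt set and averaging behind the 0.0167 figure are not documented here, so this is illustrative only:

```python
# Illustrative per-prompt KL(original || ablated) over next-token distributions.
import torch
import torch.nn.functional as F

@torch.no_grad()
def next_token_kl(original_model, ablated_model, input_ids: torch.Tensor) -> float:
    p_log = F.log_softmax(original_model(input_ids).logits[:, -1, :], dim=-1)
    q_log = F.log_softmax(ablated_model(input_ids).logits[:, -1, :], dim=-1)
    # KL(P || Q) = sum_i P(i) * (log P(i) - log Q(i))
    return (p_log.exp() * (p_log - q_log)).sum(dim=-1).mean().item()
```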

MMLU test results:

| Metric | Original | Heretic |
|---|---|---|
| Total questions | 7021 | 7021 |
| Correct | 5955 | 5893 |
| Accuracy | 0.8482 (84.82%) | 0.8393 (83.93%) |
| Parse failures | 67 | 49 |

Top subjects:

| Subject | Original | Heretic |
|---|---|---|
| professional_law | 0.7350 (577/785) | 0.7083 (556/785) |
| moral_scenarios | 0.8348 (369/442) | 0.8167 (361/442) |
| miscellaneous | 0.9243 (354/383) | 0.9112 (349/383) |
| professional_psychology | 0.8734 (276/316) | 0.8639 (273/316) |
| high_school_psychology | 0.9630 (260/270) | 0.9593 (259/270) |
| high_school_macroeconomics | 0.9086 (179/197) | 0.9036 (178/197) |
| prehistory | 0.8837 (152/172) | 0.8837 (152/172) |
| moral_disputes | 0.8448 (147/174) | 0.8276 (144/174) |
| elementary_mathematics | 0.9076 (167/184) | 0.9130 (168/184) |
| philosophy | 0.8428 (134/159) | 0.8113 (129/159) |

MMLU - Massive Multitask Language Understanding, multiple-choice questions across 57 subjects (math, history, law, medicine, etc.).
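
A rough outline of how such an MMLU pass could be scored is below. The actual harness, prompt format, and answer parsing behind the numbers above are not documented, and the `generate_answer` helper is hypothetical:

```python
# Outline of a simple MMLU multiple-choice scoring loop using `datasets`.
from datasets import load_dataset

CHOICES = ["A", "B", "C", "D"]

def score_mmlu(generate_answer) -> None:
    """`generate_answer(question, options) -> str` is a hypothetical helper
    that prompts the model and returns one of "A".."D" ("" on parse failure)."""
    data = load_dataset("cais/mmlu", "all", split="test")
    correct = parse_failures = 0
    for row in data:
        pred = generate_answer(row["question"], row["choices"])
        if pred not in CHOICES:
            parse_failures += 1
        elif CHOICES.index(pred) == row["answer"]:
            correct += 1
    total = len(data)
    print(f"Total: {total}  Correct: {correct}  "
          f"Accuracy: {correct / total:.4f}  Parse failures: {parse_failures}")
```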

GGUF Version

GGUF quantizations are available at llmfan46/Gemma-4-Garnet-V2-31B-it-ultra-uncensored-heretic-GGUF.
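
A minimal local-inference sketch with llama-cpp-python is below. The quantization filename pattern is an assumption; check the GGUF repo for the files actually available:

```python
# Local inference sketch with llama-cpp-python; the filename pattern below
# is a guess, so match it against a real file in the GGUF repo.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="llmfan46/Gemma-4-Garnet-V2-31B-it-ultra-uncensored-heretic-GGUF",
    filename="*Q4_K_M*",  # hypothetical pattern; pick an actual quantization
    n_ctx=8192,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write an ornate opening paragraph for a gothic novel."}],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```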


ConicCat/Gemma4-GarnetV2-31B

A finetune primarily focused on improving the prose and writing capabilities of Gemma 4. This does generalize strongly to roleplay and most other creative domains as well.

Features:

  • Improved longform writing capabilities; output context extension allows prompting for up to 4000 words of text in one go.
  • Markedly less AI slop and identifiable Gemini-isms in writing.
  • Improved swipe or output diversity.
  • Fewer 'soft' refusals in writing.

Difference from V1

More and better roleplay data, as well as a shift toward training primarily on fantasy and sci-fi books rather than literary fiction.

Datasets

  • internlm/Condor-SFT-20K for instruct; even though instruct capabilities are not the primary focus, adding some instruct data helps mitigate forgetting and maintains general intellect and instruction-following capabilities.
  • ConicCat/Gutenberg-SFT: a reformatted version of jondurbin's original Gutenberg DPO dataset, adapted for SFT with slight augmentation to address many of the samples being overly long.
  • A dataset of backtranslated books. Unfortunately, I am unable to release this set as all of the data is under copyright.
  • A dash of a certain third owned archive.