🚨⚠️ I HAVE REACHED HUGGING FACE'S FREE STORAGE LIMIT ⚠️🚨

I can no longer upload new models unless I can cover the cost of additional storage.
I host 70+ free models as an independent contributor, and this work is unpaid.
Without your support, no new models can be uploaded.

🎉 Patreon (Monthly)  |  ☕ Ko-fi (One-time)

Every contribution goes directly toward Hugging Face storage fees, keeping these models free for everyone.


97% fewer refusals (3/100 Uncensored vs 91/100 Original) while preserving model quality (0.1005 KL divergence).

โค๏ธ Support My Work

Creating these models takes significant time, work, and compute. If you find them useful, consider supporting me:


| Platform | Link | What you get |
|---|---|---|
| 🎉 Patreon | Monthly support | Priority model requests |
| ☕ Ko-fi | One-time tip | My eternal gratitude |

Your support motivates me and goes toward improving my workflow and covering fees for storage and compute; it may even make it possible to uncensor larger models on rented cloud GPUs.


GGUF quantizations of llmfan46/Omega-Evolution-27B-v2.0-ultra-uncensored-heretic.

This is a decensored version of Omega-Evolution-27B-v2.0, made using Heretic v1.2.0 with the Arbitrary-Rank Ablation (ARA) method.

Abliteration parameters

| Parameter | Value |
|---|---|
| start_layer_index | 26 |
| end_layer_index | 56 |
| preserve_good_behavior_weight | 0.9165 |
| steer_bad_behavior_weight | 0.0002 |
| overcorrect_relative_weight | 1.1201 |
| neighbor_count | 10 |

Targeted components

  • attn.o_proj
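Heretic's ARA generalizes classic rank-1 abliteration to arbitrary rank. As a rough illustration of the underlying idea only (not Heretic's actual implementation), here is a rank-1 sketch: the targeted weight is modified so its outputs lose their component along a "refusal" direction. The direction and dimensions below are made-up toy values.

```python
import numpy as np

def ablate_direction(W: np.ndarray, r: np.ndarray, weight: float = 1.0) -> np.ndarray:
    """Remove the component of W's outputs along direction r (rank-1 ablation).

    W      : (d_out, d_in) weight of a targeted projection (e.g. attn.o_proj)
    r      : (d_out,) hypothetical 'refusal' direction in the residual stream
    weight : ablation strength; 1.0 removes the component entirely
    """
    r = r / np.linalg.norm(r)                # unit-normalize the direction
    # W' = (I - weight * r r^T) W, so r^T (W' x) == 0 for every input x at weight=1
    return W - weight * np.outer(r, r @ W)

# Toy example (dimensions and direction are made up)
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))
r = rng.normal(size=8)
W_ablated = ablate_direction(W, r)

r_unit = r / np.linalg.norm(r)
print(np.abs(r_unit @ W_ablated).max())      # effectively zero after full ablation
```

In practice Heretic optimizes per-layer weights (the parameters tabled above) rather than applying a single fixed direction at full strength.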

Performance

| Metric | This model | Original model (Omega-Evolution-27B-v2.0) |
|---|---|---|
| KL divergence | 0.1005 | 0 (by definition) |
| Refusals | ✅ 3/100 | ❌ 91/100 |
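The KL divergence figure compares the two models' next-token distributions on the same prompts; lower means the modified model stays closer to the original. A minimal sketch of the metric itself, using made-up toy distributions (not the actual measurement protocol):

```python
import math

def kl_divergence(p, q):
    """KL(P || Q) in nats for discrete distributions over the same vocabulary."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical next-token distributions over a 4-token vocabulary
p_original = [0.70, 0.20, 0.05, 0.05]
q_modified = [0.65, 0.24, 0.06, 0.05]

print(kl_divergence(p_original, q_modified))   # small positive value: close distributions
print(kl_divergence(p_original, p_original))   # 0.0: identical distributions, hence "0 by definition"
```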

PIQA test results with batch size 128:

Original:

| Tasks | Version | Filter | n-shot | Metric | Value | Stderr |
|---|---|---|---|---|---|---|
| piqa | 1 | none | 0 | acc ↑ | 0.8161 | ± 0.0090 |
| | | none | 0 | acc_norm ↑ | 0.8194 | ± 0.0090 |

Heretic:

| Tasks | Version | Filter | n-shot | Metric | Value | Stderr |
|---|---|---|---|---|---|---|
| piqa | 1 | none | 0 | acc ↑ | 0.8172 | ± 0.0090 |
| | | none | 0 | acc_norm ↑ | 0.8205 | ± 0.0090 |

Lower refusal counts indicate fewer content restrictions, while lower KL divergence indicates closer agreement with the original model's output distribution. Higher refusal counts mean more rejections, objections, pushback, lecturing, censorship, softening, and deflection.

PIQA (Physical Interaction: Question Answering) is a benchmark of roughly 1,800 questions testing common-sense understanding of how the physical world works. Heretic acc and acc_norm scores close to the original model's indicate good capability preservation; a large drop relative to the original would mean a large loss of capability in the Hereticated model.

acc measures raw accuracy (which answer the model assigns higher probability), while acc_norm measures length-normalized accuracy, correcting for answer-length bias. For this purpose, acc_norm matters more: longer answers naturally receive lower probabilities (more tokens mean more chances to lose probability mass), so without normalization models unfairly favor shorter answers. acc_norm normalizes the score by answer length to correct this.
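A toy illustration of how the two scoring rules can disagree (the log-likelihoods and token counts below are hypothetical; lm-evaluation-harness derives them from the model):

```python
# Hypothetical per-answer total log-likelihoods and token lengths for one question
candidates = [
    {"text": "short answer",         "logprob": -4.0, "tokens": 2},
    {"text": "a much longer answer", "logprob": -4.5, "tokens": 5},  # suppose this one is correct
]

# acc: choose the answer with the highest raw log-likelihood
acc_pick = max(range(len(candidates)), key=lambda i: candidates[i]["logprob"])

# acc_norm: choose the answer with the highest per-token log-likelihood
norm_pick = max(range(len(candidates)),
                key=lambda i: candidates[i]["logprob"] / candidates[i]["tokens"])

print(acc_pick, norm_pick)  # 0 1: raw scoring favors the short answer, normalized scoring does not
```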

MMLU test results with batch size 64:

Original:

| Tasks | Version | Filter | n-shot | Metric | Value | Stderr |
|---|---|---|---|---|---|---|
| mmlu | 2 | none | | acc ↑ | 0.8480 | ± 0.0029 |
| - humanities | 2 | none | | acc ↑ | 0.7904 | ± 0.0057 |
| - formal_logic | 1 | none | 0 | acc ↑ | 0.7302 | ± 0.0397 |
| - high_school_european_history | 1 | none | 0 | acc ↑ | 0.8606 | ± 0.0270 |
| - high_school_us_history | 1 | none | 0 | acc ↑ | 0.9216 | ± 0.0189 |
| - high_school_world_history | 1 | none | 0 | acc ↑ | 0.9494 | ± 0.0143 |
| - international_law | 1 | none | 0 | acc ↑ | 0.9256 | ± 0.0240 |
| - jurisprudence | 1 | none | 0 | acc ↑ | 0.9259 | ± 0.0253 |
| - logical_fallacies | 1 | none | 0 | acc ↑ | 0.9080 | ± 0.0227 |
| - moral_disputes | 1 | none | 0 | acc ↑ | 0.8584 | ± 0.0188 |
| - moral_scenarios | 1 | none | 0 | acc ↑ | 0.6894 | ± 0.0155 |
| - philosophy | 1 | none | 0 | acc ↑ | 0.8714 | ± 0.0190 |
| - prehistory | 1 | none | 0 | acc ↑ | 0.9167 | ± 0.0154 |
| - professional_law | 1 | none | 0 | acc ↑ | 0.6988 | ± 0.0117 |
| - world_religions | 1 | none | 0 | acc ↑ | 0.9240 | ± 0.0203 |
| - other | 2 | none | | acc ↑ | 0.8693 | ± 0.0058 |
| - business_ethics | 1 | none | 0 | acc ↑ | 0.8300 | ± 0.0378 |
| - clinical_knowledge | 1 | none | 0 | acc ↑ | 0.9094 | ± 0.0177 |
| - college_medicine | 1 | none | 0 | acc ↑ | 0.8728 | ± 0.0254 |
| - global_facts | 1 | none | 0 | acc ↑ | 0.5800 | ± 0.0496 |
| - human_aging | 1 | none | 0 | acc ↑ | 0.8430 | ± 0.0244 |
| - management | 1 | none | 0 | acc ↑ | 0.8835 | ± 0.0318 |
| - marketing | 1 | none | 0 | acc ↑ | 0.9402 | ± 0.0155 |
| - medical_genetics | 1 | none | 0 | acc ↑ | 0.9600 | ± 0.0197 |
| - miscellaneous | 1 | none | 0 | acc ↑ | 0.9259 | ± 0.0094 |
| - nutrition | 1 | none | 0 | acc ↑ | 0.9020 | ± 0.0170 |
| - professional_accounting | 1 | none | 0 | acc ↑ | 0.7695 | ± 0.0251 |
| - professional_medicine | 1 | none | 0 | acc ↑ | 0.9559 | ± 0.0125 |
| - virology | 1 | none | 0 | acc ↑ | 0.5723 | ± 0.0385 |
| - social sciences | 2 | none | | acc ↑ | 0.9126 | ± 0.0050 |
| - econometrics | 1 | none | 0 | acc ↑ | 0.7982 | ± 0.0378 |
| - high_school_geography | 1 | none | 0 | acc ↑ | 0.9343 | ± 0.0176 |
| - high_school_government_and_politics | 1 | none | 0 | acc ↑ | 0.9948 | ± 0.0052 |
| - high_school_macroeconomics | 1 | none | 0 | acc ↑ | 0.9282 | ± 0.0131 |
| - high_school_microeconomics | 1 | none | 0 | acc ↑ | 0.9622 | ± 0.0124 |
| - high_school_psychology | 1 | none | 0 | acc ↑ | 0.9541 | ± 0.0090 |
| - human_sexuality | 1 | none | 0 | acc ↑ | 0.9237 | ± 0.0233 |
| - professional_psychology | 1 | none | 0 | acc ↑ | 0.8905 | ± 0.0126 |
| - public_relations | 1 | none | 0 | acc ↑ | 0.7545 | ± 0.0412 |
| - security_studies | 1 | none | 0 | acc ↑ | 0.8163 | ± 0.0248 |
| - sociology | 1 | none | 0 | acc ↑ | 0.9303 | ± 0.0180 |
| - us_foreign_policy | 1 | none | 0 | acc ↑ | 0.9300 | ± 0.0256 |
| - stem | 2 | none | | acc ↑ | 0.8497 | ± 0.0062 |
| - abstract_algebra | 1 | none | 0 | acc ↑ | 0.7700 | ± 0.0423 |
| - anatomy | 1 | none | 0 | acc ↑ | 0.8519 | ± 0.0307 |
| - astronomy | 1 | none | 0 | acc ↑ | 0.9671 | ± 0.0145 |
| - college_biology | 1 | none | 0 | acc ↑ | 0.9583 | ± 0.0167 |
| - college_chemistry | 1 | none | 0 | acc ↑ | 0.6600 | ± 0.0476 |
| - college_computer_science | 1 | none | 0 | acc ↑ | 0.8300 | ± 0.0378 |
| - college_mathematics | 1 | none | 0 | acc ↑ | 0.6700 | ± 0.0473 |
| - college_physics | 1 | none | 0 | acc ↑ | 0.7941 | ± 0.0402 |
| - computer_security | 1 | none | 0 | acc ↑ | 0.8600 | ± 0.0349 |
| - conceptual_physics | 1 | none | 0 | acc ↑ | 0.9319 | ± 0.0165 |
| - electrical_engineering | 1 | none | 0 | acc ↑ | 0.8138 | ± 0.0324 |
| - elementary_mathematics | 1 | none | 0 | acc ↑ | 0.8836 | ± 0.0165 |
| - high_school_biology | 1 | none | 0 | acc ↑ | 0.9581 | ± 0.0114 |
| - high_school_chemistry | 1 | none | 0 | acc ↑ | 0.8768 | ± 0.0231 |
| - high_school_computer_science | 1 | none | 0 | acc ↑ | 0.9400 | ± 0.0239 |
| - high_school_mathematics | 1 | none | 0 | acc ↑ | 0.6667 | ± 0.0287 |
| - high_school_physics | 1 | none | 0 | acc ↑ | 0.8212 | ± 0.0313 |
| - high_school_statistics | 1 | none | 0 | acc ↑ | 0.8796 | ± 0.0222 |
| - machine_learning | 1 | none | 0 | acc ↑ | 0.7589 | ± 0.0406 |

| Groups | Version | Filter | n-shot | Metric | Value | Stderr |
|---|---|---|---|---|---|---|
| mmlu | 2 | none | | acc ↑ | 0.8480 | ± 0.0029 |
| - humanities | 2 | none | | acc ↑ | 0.7904 | ± 0.0057 |
| - other | 2 | none | | acc ↑ | 0.8693 | ± 0.0058 |
| - social sciences | 2 | none | | acc ↑ | 0.9126 | ± 0.0050 |
| - stem | 2 | none | | acc ↑ | 0.8497 | ± 0.0062 |

Heretic:

| Tasks | Version | Filter | n-shot | Metric | Value | Stderr |
|---|---|---|---|---|---|---|
| mmlu | 2 | none | | acc ↑ | 0.8346 | ± 0.0030 |
| - humanities | 2 | none | | acc ↑ | 0.7562 | ± 0.0059 |
| - formal_logic | 1 | none | 0 | acc ↑ | 0.7381 | ± 0.0393 |
| - high_school_european_history | 1 | none | 0 | acc ↑ | 0.8485 | ± 0.0280 |
| - high_school_us_history | 1 | none | 0 | acc ↑ | 0.9167 | ± 0.0194 |
| - high_school_world_history | 1 | none | 0 | acc ↑ | 0.9409 | ± 0.0153 |
| - international_law | 1 | none | 0 | acc ↑ | 0.9256 | ± 0.0240 |
| - jurisprudence | 1 | none | 0 | acc ↑ | 0.9352 | ± 0.0238 |
| - logical_fallacies | 1 | none | 0 | acc ↑ | 0.8957 | ± 0.0240 |
| - moral_disputes | 1 | none | 0 | acc ↑ | 0.8497 | ± 0.0192 |
| - moral_scenarios | 1 | none | 0 | acc ↑ | 0.5385 | ± 0.0167 |
| - philosophy | 1 | none | 0 | acc ↑ | 0.8682 | ± 0.0192 |
| - prehistory | 1 | none | 0 | acc ↑ | 0.9105 | ± 0.0159 |
| - professional_law | 1 | none | 0 | acc ↑ | 0.6890 | ± 0.0118 |
| - world_religions | 1 | none | 0 | acc ↑ | 0.9240 | ± 0.0203 |
| - other | 2 | none | | acc ↑ | 0.8687 | ± 0.0058 |
| - business_ethics | 1 | none | 0 | acc ↑ | 0.8300 | ± 0.0378 |
| - clinical_knowledge | 1 | none | 0 | acc ↑ | 0.9208 | ± 0.0166 |
| - college_medicine | 1 | none | 0 | acc ↑ | 0.8671 | ± 0.0259 |
| - global_facts | 1 | none | 0 | acc ↑ | 0.5900 | ± 0.0494 |
| - human_aging | 1 | none | 0 | acc ↑ | 0.8430 | ± 0.0244 |
| - management | 1 | none | 0 | acc ↑ | 0.8932 | ± 0.0306 |
| - marketing | 1 | none | 0 | acc ↑ | 0.9444 | ± 0.0150 |
| - medical_genetics | 1 | none | 0 | acc ↑ | 0.9600 | ± 0.0197 |
| - miscellaneous | 1 | none | 0 | acc ↑ | 0.9195 | ± 0.0097 |
| - nutrition | 1 | none | 0 | acc ↑ | 0.8954 | ± 0.0175 |
| - professional_accounting | 1 | none | 0 | acc ↑ | 0.7801 | ± 0.0247 |
| - professional_medicine | 1 | none | 0 | acc ↑ | 0.9559 | ± 0.0125 |
| - virology | 1 | none | 0 | acc ↑ | 0.5542 | ± 0.0387 |
| - social sciences | 2 | none | | acc ↑ | 0.9106 | ± 0.0050 |
| - econometrics | 1 | none | 0 | acc ↑ | 0.7807 | ± 0.0389 |
| - high_school_geography | 1 | none | 0 | acc ↑ | 0.9293 | ± 0.0183 |
| - high_school_government_and_politics | 1 | none | 0 | acc ↑ | 0.9948 | ± 0.0052 |
| - high_school_macroeconomics | 1 | none | 0 | acc ↑ | 0.9308 | ± 0.0129 |
| - high_school_microeconomics | 1 | none | 0 | acc ↑ | 0.9664 | ± 0.0117 |
| - high_school_psychology | 1 | none | 0 | acc ↑ | 0.9560 | ± 0.0088 |
| - human_sexuality | 1 | none | 0 | acc ↑ | 0.9160 | ± 0.0243 |
| - professional_psychology | 1 | none | 0 | acc ↑ | 0.8824 | ± 0.0130 |
| - public_relations | 1 | none | 0 | acc ↑ | 0.7545 | ± 0.0412 |
| - security_studies | 1 | none | 0 | acc ↑ | 0.8000 | ± 0.0256 |
| - sociology | 1 | none | 0 | acc ↑ | 0.9453 | ± 0.0161 |
| - us_foreign_policy | 1 | none | 0 | acc ↑ | 0.9400 | ± 0.0239 |
| - stem | 2 | none | | acc ↑ | 0.8440 | ± 0.0062 |
| - abstract_algebra | 1 | none | 0 | acc ↑ | 0.7300 | ± 0.0446 |
| - anatomy | 1 | none | 0 | acc ↑ | 0.8593 | ± 0.0300 |
| - astronomy | 1 | none | 0 | acc ↑ | 0.9539 | ± 0.0171 |
| - college_biology | 1 | none | 0 | acc ↑ | 0.9722 | ± 0.0137 |
| - college_chemistry | 1 | none | 0 | acc ↑ | 0.6700 | ± 0.0473 |
| - college_computer_science | 1 | none | 0 | acc ↑ | 0.8200 | ± 0.0386 |
| - college_mathematics | 1 | none | 0 | acc ↑ | 0.6500 | ± 0.0479 |
| - college_physics | 1 | none | 0 | acc ↑ | 0.7843 | ± 0.0409 |
| - computer_security | 1 | none | 0 | acc ↑ | 0.8300 | ± 0.0378 |
| - conceptual_physics | 1 | none | 0 | acc ↑ | 0.9362 | ± 0.0160 |
| - electrical_engineering | 1 | none | 0 | acc ↑ | 0.8276 | ± 0.0315 |
| - elementary_mathematics | 1 | none | 0 | acc ↑ | 0.8862 | ± 0.0164 |
| - high_school_biology | 1 | none | 0 | acc ↑ | 0.9581 | ± 0.0114 |
| - high_school_chemistry | 1 | none | 0 | acc ↑ | 0.8571 | ± 0.0246 |
| - high_school_computer_science | 1 | none | 0 | acc ↑ | 0.9200 | ± 0.0273 |
| - high_school_mathematics | 1 | none | 0 | acc ↑ | 0.6556 | ± 0.0290 |
| - high_school_physics | 1 | none | 0 | acc ↑ | 0.8212 | ± 0.0313 |
| - high_school_statistics | 1 | none | 0 | acc ↑ | 0.8750 | ± 0.0226 |
| - machine_learning | 1 | none | 0 | acc ↑ | 0.7321 | ± 0.0420 |

| Groups | Version | Filter | n-shot | Metric | Value | Stderr |
|---|---|---|---|---|---|---|
| mmlu | 2 | none | | acc ↑ | 0.8346 | ± 0.0030 |
| - humanities | 2 | none | | acc ↑ | 0.7562 | ± 0.0059 |
| - other | 2 | none | | acc ↑ | 0.8687 | ± 0.0058 |
| - social sciences | 2 | none | | acc ↑ | 0.9106 | ± 0.0050 |
| - stem | 2 | none | | acc ↑ | 0.8440 | ± 0.0062 |

MMLU (Massive Multitask Language Understanding) is a benchmark of ~14,000 multiple-choice questions across 57 subjects (math, history, law, medicine, etc.).


Quantizations

| Filename | Quant | Description |
|---|---|---|
| Omega-Evolution-27B-v2.0-ultra-uncensored-heretic-BF16.gguf | BF16 | Full precision |
| Omega-Evolution-27B-v2.0-ultra-uncensored-heretic-Q8_0.gguf | Q8_0 | Near-lossless, recommended |
| Omega-Evolution-27B-v2.0-ultra-uncensored-heretic-Q6_K.gguf | Q6_K | Excellent quality |
| Omega-Evolution-27B-v2.0-ultra-uncensored-heretic-Q5_K_M.gguf | Q5_K_M | Good balance |
| Omega-Evolution-27B-v2.0-ultra-uncensored-heretic-Q5_K_S.gguf | Q5_K_S | Smaller Q5 |
| Omega-Evolution-27B-v2.0-ultra-uncensored-heretic-Q4_K_M.gguf | Q4_K_M | Good for limited VRAM |
| Omega-Evolution-27B-v2.0-ultra-uncensored-heretic-Q4_K_S.gguf | Q4_K_S | Smaller Q4 |
| Omega-Evolution-27B-v2.0-ultra-uncensored-heretic-Q3_K_L.gguf | Q3_K_L | Low VRAM, decent quality |
| Omega-Evolution-27B-v2.0-ultra-uncensored-heretic-Q3_K_M.gguf | Q3_K_M | Low VRAM, smaller |

Vision Projector

| Filename | Quant | Description |
|---|---|---|
| Omega-Evolution-27B-v2.0-ultra-uncensored-heretic-BF16.gguf | BF16 | Native precision |

A vision projector file is required for vision/multimodal capabilities. Use it alongside any quantization above.

Usage

Works with llama.cpp, LM Studio, Ollama, and other GGUF-compatible tools.
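For example, with llama.cpp's llama-cli (the local path shown is illustrative; the sampling values match the model's recommended settings):

```shell
# Interactive chat with the recommended Q8_0 quantization (adjust the path to your download)
./llama-cli \
  -m ./Omega-Evolution-27B-v2.0-ultra-uncensored-heretic-Q8_0.gguf \
  --temp 0.9 --top-p 0.95 \
  -cnv
```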


😈 OMEGA EVOLUTION V2.0 😈

⚠️ 27B Parameters ⚠️

⚠️ Thinking works (RP prompts) ⚠️


🔴 CLASSIFIED WARNINGS

  • This is a hybrid construct of Safeword Omega Directive, Safeword Omega Darker, and Brisk Evolution v0.3.
  • CONTENT WARNING: NSFW, Explicit, ERP, and Unaligned behavior are enabled by default.
  • Dataset Revamp: took a sledgehammer to the dataset. Most formatting issues should be gone now.

โš™๏ธ SYSTEM PARAMETERS

top_p 0.95
temp 0.9
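For intuition: temperature rescales the logits before the softmax, and top-p (nucleus) sampling then draws only from the smallest set of tokens covering that much probability mass. A minimal sketch of the mechanism (not any particular runtime's implementation):

```python
import math
import random

def sample_top_p(logits, temperature=0.9, top_p=0.95, seed=0):
    """Sample a token index using temperature scaling plus nucleus (top-p) filtering."""
    # Temperature-scaled softmax (numerically stabilized)
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]

    # Keep the smallest set of tokens whose cumulative probability reaches top_p
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break

    # Renormalize over the kept tokens and draw one
    rng = random.Random(seed)
    r = rng.random() * sum(probs[i] for i in kept)
    for i in kept:
        r -= probs[i]
        if r <= 0:
            return i
    return kept[-1]

# Toy logits over a 5-token vocabulary
print(sample_top_p([2.0, 1.0, 0.5, -1.0, -3.0]))
```

Lower temperature or lower top-p makes output more deterministic; the values above are the card's recommended defaults.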

🧪 ARCHITECTS

  • GECFDO (Dataset Generation & Quants)
  • Darkhn (Dataset Cleanup Tool)
  • Sleep Deprived (Safeword Creator)
  • FrenzyBiscuit (Brisk Evolution Creator)
🔥 LICENSE: APACHE 2.0 (WITH MORAL DISCLAIMER) 🔥
You accept full responsibility for corruption. You are 18+. The architects are not liable for the depravity you unleash.

Generated in 2026

Current Contributor: ...

WE ARE WATCHING YOU. DO NOT LOOK BACK.

Model tree for llmfan46/Omega-Evolution-27B-v2.0-ultra-uncensored-heretic-GGUF

Base model: Qwen/Qwen3.5-27B → finetuned → this model