This is an abliterated fine-tune of Llama-3.3-8B-Instruct-128K, produced with P-E-W's Heretic (v1.1.0) abliteration engine with the Magnitude-Preserving Orthogonal Ablation PR merged in.
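For context, abliteration estimates a "refusal direction" in the residual stream and projects it out of selected projection matrices. The sketch below is a minimal, hypothetical illustration of directional ablation with an optional magnitude-preserving step; it is not Heretic's actual implementation, and restoring per-row norms is only one plausible reading of "magnitude-preserving".

```python
import torch

def ablate_direction(weight: torch.Tensor, refusal_dir: torch.Tensor,
                     scale: float = 1.0, preserve_magnitude: bool = True) -> torch.Tensor:
    """Remove the component along `refusal_dir` from a projection's output space.

    weight:      (d_out, d_in) matrix, e.g. attn.o_proj or mlp.down_proj
    refusal_dir: (d_out,) estimated refusal direction in the residual stream
    scale:       ablation strength; Heretic tunes this per layer and component
    """
    r = refusal_dir / refusal_dir.norm()
    original_row_norms = weight.norm(dim=1, keepdim=True)
    # Orthogonal ablation: W' = W - scale * r (r^T W), projecting r out of W's outputs
    ablated = weight - scale * torch.outer(r, r @ weight)
    if preserve_magnitude:
        # One plausible "magnitude-preserving" step: restore each row's original norm
        new_norms = ablated.norm(dim=1, keepdim=True).clamp_min(1e-8)
        ablated = ablated * (original_row_norms / new_norms)
    return ablated
```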


Heretication Results

| Score Metric     | Value  | Parameter                          | Value |
|------------------|--------|------------------------------------|-------|
| Refusals         | 9/100  | direction_index                    | 15.60 |
| KL Divergence    | 0.0413 | attn.o_proj.max_weight             | 1.64  |
| Initial Refusals | 96/100 | attn.o_proj.max_weight_position    | 27.79 |
|                  |        | attn.o_proj.min_weight             | 1.54  |
|                  |        | attn.o_proj.min_weight_distance    | 18.14 |
|                  |        | mlp.down_proj.max_weight           | 1.29  |
|                  |        | mlp.down_proj.max_weight_position  | 23.54 |
|                  |        | mlp.down_proj.min_weight           | 1.12  |
|                  |        | mlp.down_proj.min_weight_distance  | 11.43 |
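The right-hand parameters describe Heretic's per-component ablation-weight kernel: how strongly each layer is ablated as a function of its distance from max_weight_position. The exact kernel shape is defined by Heretic itself; the sketch below assumes a simple linear falloff from max_weight down to min_weight at min_weight_distance layers away, which is an illustration rather than the tool's confirmed formula.

```python
def layer_ablation_weight(layer_index: float,
                          max_weight: float, max_weight_position: float,
                          min_weight: float, min_weight_distance: float) -> float:
    """Hypothetical per-layer ablation weight with a linear falloff
    (an assumption; Heretic's real kernel may differ in shape)."""
    distance = abs(layer_index - max_weight_position)
    if distance >= min_weight_distance:
        return min_weight
    frac = distance / min_weight_distance
    return max_weight + frac * (min_weight - max_weight)

# Example with this model's attn.o_proj parameters from the table above:
print(layer_ablation_weight(28, 1.64, 27.79, 1.54, 18.14))  # near the peak, ~1.64
```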

Degree of Heretication

The Heresy Index weighs the model's corruption by the process (KL divergence) against its abolition of doctrine (refusals) to arrive at a final classification verdict.

| Index Entry | Classification  | Analysis                                                     |
|-------------|-----------------|--------------------------------------------------------------|
| Absolute    | Absolute Heresy | Fewer than 10/100 refusals and KL divergence below 0.10      |
| Tainted     | Tainted Heresy  | 11–25/100 refusals and/or KL divergence of 0.11–0.20         |
| Impotent    | Impotent Heresy | More than 25/100 refusals and KL divergence of 0.21 or above |

Note: This is an arbitrary classification inspired by Warhammer 40K; it carries no tangible indication of the model's actual performance.
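Expressed as code, here is one reading of the thresholds above (the and/or logic in the table is informal, so the boundary handling in this helper is an assumption):

```python
def heresy_index(refusals: int, kl_divergence: float) -> str:
    """Classify a hereticated model per the table above (one interpretation)."""
    if refusals < 10 and kl_divergence <= 0.10:
        return "Absolute Heresy"
    if refusals <= 25 and kl_divergence <= 0.20:
        return "Tainted Heresy"
    return "Impotent Heresy"

print(heresy_index(9, 0.0413))  # this model's scores -> "Absolute Heresy"
```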


Llama 3.3 8B 128K Instruct (Fixed)

Original model: allura-forge/Llama-3.3-8B-Instruct. Thanks!

imatrix GGUFs by mradermacher (recommended)

static GGUFs

Evals

Additional Fixes:

  • Added rope_scaling
  • Added chat template (Unsloth) in tokenizer config
  • Updated generation config
  • Enabled full context length (a quick check of these fixes is sketched below)
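The fixes above are visible from the published configs; a minimal verification sketch using transformers, with the repo id taken from this card:

```python
from transformers import AutoConfig, AutoTokenizer

repo = "MuXodious/Llama-3.3-8B-Instruct-128K-absolute-heresy"

config = AutoConfig.from_pretrained(repo)
print(config.rope_scaling)             # the added rope_scaling block
print(config.max_position_embeddings)  # full 128K context -> 131072

tokenizer = AutoTokenizer.from_pretrained(repo)
print(tokenizer.chat_template is not None)  # chat template ships in tokenizer config
```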
