---
tags:
- heretic
- uncensored
- decensored
- abliterated
base_model:
- TheDrummer/Rocinante-XL-16B-v1
pipeline_tag: text-generation
---
This is a **Rocinante-XL-16B-v1** fine-tune, produced through P-E-W's [Heretic](https://github.com/p-e-w/heretic) (v1.2.0) abliteration engine with [Self-Organizing Maps & Magnitude-Preserving Orthogonal Ablation](https://github.com/p-e-w/heretic/pull/196) enabled.
---
## Heretication Results
| Score Metric | Value | Parameter | Value |
| :--- | :--- | :--- | :--- |
| **Refusals** | 3/416 | **direction_index** | 22.20 |
| **KL Divergence** | 0.0182 | **attn.o_proj.max_weights.0** | 1.26 |
| **Initial Refusals** | 339/416 | **attn.o_proj.max_weights.1** | 0.64 |
||| **attn.o_proj.max_weights.2** | 1.41 |
||| **attn.o_proj.max_weights.3** | 0.94 |
||| **attn.o_proj.max_weight_position** | 23.86 |
||| **attn.o_proj.min_weights.0** | 0.97 |
||| **attn.o_proj.min_weights.1** | 0.03 |
||| **attn.o_proj.min_weights.2** | 1.18 |
||| **attn.o_proj.min_weights.3** | 0.93 |
||| **attn.o_proj.min_weight_distance** | 18.57 |
||| **mlp.down_proj.max_weights.0** | 1.23 |
||| **mlp.down_proj.max_weights.1** | 0.70 |
||| **mlp.down_proj.max_weights.2** | 1.35 |
||| **mlp.down_proj.max_weights.3** | 0.86 |
||| **mlp.down_proj.max_weight_position** | 28.60 |
||| **mlp.down_proj.min_weights.0** | 0.37 |
||| **mlp.down_proj.min_weights.1** | 0.25 |
||| **mlp.down_proj.min_weights.2** | 1.01 |
||| **mlp.down_proj.min_weights.3** | 0.45 |
||| **mlp.down_proj.min_weight_distance** | 5.96 |
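The KL divergence above measures how far the abliterated model's next-token distribution drifts from the base model on harmless prompts (lower means less collateral damage). A minimal sketch of the underlying formula on toy distributions — illustrative only, not Heretic's exact implementation:

```python
import math

def kl_divergence(p, q):
    """KL(P || Q) for two discrete next-token distributions.

    p, q: probability lists over the same vocabulary, each summing to 1.
    A small epsilon guards against log(0) on zeroed entries.
    """
    eps = 1e-12
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

# Toy example: a mildly perturbed 4-token distribution.
base = [0.70, 0.20, 0.07, 0.03]
abliterated = [0.68, 0.22, 0.07, 0.03]
print(round(kl_divergence(base, abliterated), 4))  # ≈ 0.0012
```

Identical distributions give a divergence of 0; the larger the edit's effect on benign outputs, the higher the score, which is why the trials below trade refusal count against this number.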
---
## Degree of Heretication
The **Heresy Index** weighs the damage the process inflicted on the model (KL divergence, PIQA scores, and manual response evaluation) against its abolition of doctrine (refusals) to reach a final classification.
| Index Entry | Classification | Analysis |
| :--- | :--- | :--- |
|  | **Absolute Heresy** | Near-zero overt and secondary refusals with minimal to no model damage |
|  | **Tainted Heresy** | Some residual secondary refusals and/or moderate model damage |
|  | **Impotent Heresy** | Lingering overt refusals and high model damage |
**Note**: This is an arbitrary, subjective classification inspired by Warhammer 40K, intended to serve as a signpost toward the model's performance.
---
## Appendix
> All evaluations used an empty system prompt.
**Heretication Rituals**
```
» [Trial 93] Refusals: 3/416, KL divergence: 0.0182
[Trial 159] Refusals: 4/416, KL divergence: 0.0141
[Trial 80] Refusals: 9/416, KL divergence: 0.0140
[Trial 174] Refusals: 10/416, KL divergence: 0.0140
[Trial 163] Refusals: 12/416, KL divergence: 0.0132
[Trial 118] Refusals: 15/416, KL divergence: 0.0121
[Trial 82] Refusals: 18/416, KL divergence: 0.0099
[Trial 169] Refusals: 22/416, KL divergence: 0.0095
[Trial 119] Refusals: 35/416, KL divergence: 0.0091
[Trial 96] Refusals: 40/416, KL divergence: 0.0084
[Trial 100] Refusals: 45/416, KL divergence: 0.0067
[Trial 109] Refusals: 67/416, KL divergence: 0.0066
[Trial 62] Refusals: 155/416, KL divergence: 0.0065
[Trial 151] Refusals: 157/416, KL divergence: 0.0065
[Trial 164] Refusals: 168/416, KL divergence: 0.0060
[Trial 127] Refusals: 195/416, KL divergence: 0.0048
[Trial 139] Refusals: 263/416, KL divergence: 0.0041
[Trial 32] Refusals: 267/416, KL divergence: 0.0030
[Trial 101] Refusals: 313/416, KL divergence: 0.0016
[Trial 63] Refusals: 317/416, KL divergence: 0.0015
[Trial 181] Refusals: 330/416, KL divergence: 0.0014
[Trial 13] Refusals: 332/416, KL divergence: 0.0014
[Trial 59] Refusals: 333/416, KL divergence: 0.0011
[Trial 54] Refusals: 339/416, KL divergence: 0.0008
```
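The log above shows the refusal/KL trade-off Heretic's optimizer explored; Trial 93 (marked ») sits at the low-refusal end and was the one kept. A small hypothetical parser for log lines in this format, selecting the trial with the fewest refusals (lower KL as tiebreaker):

```python
import re

LOG = """\
[Trial 93] Refusals: 3/416, KL divergence: 0.0182
[Trial 159] Refusals: 4/416, KL divergence: 0.0141
[Trial 54] Refusals: 339/416, KL divergence: 0.0008
"""

# Matches lines like "[Trial 93] Refusals: 3/416, KL divergence: 0.0182".
PATTERN = re.compile(r"\[Trial (\d+)\] Refusals: (\d+)/(\d+), KL divergence: ([\d.]+)")

def parse_trials(log):
    """Yield (trial, refusals, total, kl) tuples from Heretic-style log lines."""
    for m in PATTERN.finditer(log):
        yield int(m[1]), int(m[2]), int(m[3]), float(m[4])

# Prefer fewer refusals; break ties with lower KL divergence.
best = min(parse_trials(LOG), key=lambda t: (t[1], t[3]))
print(best)  # (93, 3, 416, 0.0182)
```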
**PIQA Benchmarks**
```
┏━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━┓
┃ Benchmark ┃ Metric ┃ Value ┃
┡━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━┩
│ PIQA Base │ acc,none │ 0.7900 │
│ │ acc_stderr,none │ 0.0095 │
│ │ acc_norm,none │ 0.8020 │
│ │ acc_norm_stderr,none │ 0.0093 │
└───────────┴──────────────────────┴────────┘
┏━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━┓
┃ Benchmark ┃ Metric ┃ Value ┃
┡━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━┩
│ PIQA T93 │ acc,none │ 0.7900 │
│ │ acc_stderr,none │ 0.0095 │
│ │ acc_norm,none │ 0.8030 │
│ │ acc_norm_stderr,none │ 0.0093 │
└───────────┴──────────────────────┴────────┘
┏━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━┓
┃ Benchmark ┃ Metric ┃ Value ┃
┡━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━┩
│ PIQA T159 │ acc,none │ 0.7878 │
│ │ acc_stderr,none │ 0.0095 │
│ │ acc_norm,none │ 0.8047 │
│ │ acc_norm_stderr,none │ 0.0092 │
└───────────┴──────────────────────┴────────┘
┏━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━┓
┃ Benchmark ┃ Metric ┃ Value ┃
┡━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━┩
│ PIQA T163 │ acc,none │ 0.7884 │
│ │ acc_stderr,none │ 0.0095 │
│ │ acc_norm,none │ 0.8036 │
│ │ acc_norm_stderr,none │ 0.0093 │
└───────────┴──────────────────────┴────────┘
┏━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━┓
┃ Benchmark ┃ Metric ┃ Value ┃
┡━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━┩
│ PIQA T80 │ acc,none │ 0.7884 │
│ │ acc_stderr,none │ 0.0095 │
│ │ acc_norm,none │ 0.8020 │
│ │ acc_norm_stderr,none │ 0.0093 │
└───────────┴──────────────────────┴────────┘
┏━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━┓
┃ Benchmark ┃ Metric ┃ Value ┃
┡━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━┩
│ PIQA T174 │ acc,none │ 0.7889 │
│ │ acc_stderr,none │ 0.0095 │
│ │ acc_norm,none │ 0.8014 │
│ │ acc_norm_stderr,none │ 0.0093 │
└───────────┴──────────────────────┴────────┘
```
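As a sanity check on the tables above, the reported standard errors are consistent with the binomial formula sqrt(p(1 − p)/n). The split size of 1,838 is an assumption (the standard PIQA validation split used by lm-evaluation-harness), not stated in the logs:

```python
import math

def binomial_stderr(acc, n):
    """Standard error of a proportion: sqrt(p * (1 - p) / n)."""
    return math.sqrt(acc * (1 - acc) / n)

N_PIQA = 1838  # assumed PIQA validation split size
print(round(binomial_stderr(0.7900, N_PIQA), 4))  # ≈ 0.0095, matching the tables
```

That the accuracies and their errors are statistically indistinguishable between the base model and the hereticated trials is the point: PIQA performance survived the ablation.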
---
Uses the Mistral v3 Tekken or Metharme chat template.

Can think via \ or \

Just like Roci X but better.

(Model card is still a WIP.)

Base model weights:
- FP16: https://huggingface.co/TheDrummer/Rocinante-XL-16B-v1
- GGUF: https://huggingface.co/TheDrummer/Rocinante-XL-16B-v1-GGUF