---
tags:
- heretic
- uncensored
- decensored
- abliterated
base_model:
- TheDrummer/Rocinante-XL-16B-v1
pipeline_tag: text-generation
---
This is a **Rocinante-XL-16B-v1** fine-tune, produced through P-E-W's [Heretic](https://github.com/p-e-w/heretic) (v1.2.0) abliteration engine with [Self-Organizing Maps & Magnitude-Preserving Orthogonal Ablation](https://github.com/p-e-w/heretic/pull/196) enabled.
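In broad strokes, magnitude-preserving orthogonal ablation projects a learned "refusal direction" out of each weight matrix, then rescales the rows so their original magnitudes are preserved. The following is a toy sketch of that idea, not Heretic's actual implementation (which operates on the model's attention and MLP projection matrices with per-layer weights like those tabulated below):

```python
import numpy as np

def ablate_direction(W, d):
    """Remove direction d from every row of weight matrix W via an
    orthogonal projection, then rescale each row back to its original
    L2 norm (the magnitude-preserving step). Toy sketch only."""
    d = d / np.linalg.norm(d)
    orig_norms = np.linalg.norm(W, axis=1, keepdims=True)
    # Orthogonal ablation: subtract each row's component along d
    W_abl = W - np.outer(W @ d, d)
    new_norms = np.linalg.norm(W_abl, axis=1, keepdims=True)
    # Restore original row magnitudes (avoid division by zero)
    return W_abl * (orig_norms / np.maximum(new_norms, 1e-12))
```

After ablation, every row is orthogonal to `d` while keeping its original norm, which is what distinguishes this variant from plain orthogonal ablation.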
---
<p>
<img src="https://img.shields.io/badge/HERESY_INDEX-ABSOLUTE-white?style=flat-square&labelColor=101010" align="right" width="250">
<b>Heretication Results</b>
<br clear="right">
<img src="https://img.shields.io/badge/RENEGADE_CHAPTER-SOMPOA-FCC900?style=flat-square&labelColor=101010" align="right" width="300">
</p>
<br clear="right">
| Score Metric | Value | Parameter | Value |
| :--- | :--- | :--- | :--- |
| **Refusals** | 3/416 | **direction_index** | 22.20 |
| **KL Divergence** | 0.0182 | **attn.o_proj.max_weights.0** | 1.26 |
| **Initial Refusals** | 339/416 | **attn.o_proj.max_weights.1** | 0.64 |
||| **attn.o_proj.max_weights.2** | 1.41 |
||| **attn.o_proj.max_weights.3** | 0.94 |
||| **attn.o_proj.max_weight_position** | 23.86 |
||| **attn.o_proj.min_weights.0** | 0.97 |
||| **attn.o_proj.min_weights.1** | 0.03 |
||| **attn.o_proj.min_weights.2** | 1.18 |
||| **attn.o_proj.min_weights.3** | 0.93 |
||| **attn.o_proj.min_weight_distance** | 18.57 |
||| **mlp.down_proj.max_weights.0** | 1.23 |
||| **mlp.down_proj.max_weights.1** | 0.70 |
||| **mlp.down_proj.max_weights.2** | 1.35 |
||| **mlp.down_proj.max_weights.3** | 0.86 |
||| **mlp.down_proj.max_weight_position** | 28.60 |
||| **mlp.down_proj.min_weights.0** | 0.37 |
||| **mlp.down_proj.min_weights.1** | 0.25 |
||| **mlp.down_proj.min_weights.2** | 1.01 |
||| **mlp.down_proj.min_weights.3** | 0.45 |
||| **mlp.down_proj.min_weight_distance** | 5.96 |
---
## Degree of Heretication
The **Heresy Index** weighs the damage the process inflicted on the model (KL divergence, PIQA, manual response evaluation) against its abolition of doctrine (refusals) to reach a final classification verdict.
| Index Entry | Classification | Analysis |
| :--- | :--- | :--- |
|  | **Absolute Heresy** | Near-zero overt and secondary refusals with minimal to no model damage |
|  | **Tainted Heresy** | Some residual secondary refusals and/or moderate model damage |
|  | **Impotent Heresy** | Lingering overt refusals and high model damage |

**Note**: This is an arbitrary and subjective classification inspired by Warhammer 40K, intended to serve as a signpost for the model's performance.
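The KL divergence cited above measures how far the hereticated model's next-token distribution drifts from the original's (lower means less collateral damage). A minimal sketch of the quantity being reported:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) in nats between two next-token probability
    distributions: p from the original model, q from the ablated one.
    Illustrative only; Heretic's exact averaging over prompts and
    token positions is not reproduced here."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))
```

A value of 0.0182, as achieved here, indicates the ablated model's output distribution stays very close to the original's.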
---
**Appendix**
> All evaluations were run with an empty system prompt.
<details>
<summary>Heretication Rituals</summary>
```
» [Trial 93] Refusals: 3/416, KL divergence: 0.0182
[Trial 159] Refusals: 4/416, KL divergence: 0.0141
[Trial 80] Refusals: 9/416, KL divergence: 0.0140
[Trial 174] Refusals: 10/416, KL divergence: 0.0140
[Trial 163] Refusals: 12/416, KL divergence: 0.0132
[Trial 118] Refusals: 15/416, KL divergence: 0.0121
[Trial 82] Refusals: 18/416, KL divergence: 0.0099
[Trial 169] Refusals: 22/416, KL divergence: 0.0095
[Trial 119] Refusals: 35/416, KL divergence: 0.0091
[Trial 96] Refusals: 40/416, KL divergence: 0.0084
[Trial 100] Refusals: 45/416, KL divergence: 0.0067
[Trial 109] Refusals: 67/416, KL divergence: 0.0066
[Trial 62] Refusals: 155/416, KL divergence: 0.0065
[Trial 151] Refusals: 157/416, KL divergence: 0.0065
[Trial 164] Refusals: 168/416, KL divergence: 0.0060
[Trial 127] Refusals: 195/416, KL divergence: 0.0048
[Trial 139] Refusals: 263/416, KL divergence: 0.0041
[Trial 32] Refusals: 267/416, KL divergence: 0.0030
[Trial 101] Refusals: 313/416, KL divergence: 0.0016
[Trial 63] Refusals: 317/416, KL divergence: 0.0015
[Trial 181] Refusals: 330/416, KL divergence: 0.0014
[Trial 13] Refusals: 332/416, KL divergence: 0.0014
[Trial 59] Refusals: 333/416, KL divergence: 0.0011
[Trial 54] Refusals: 339/416, KL divergence: 0.0008
```
</details>
<details>
<summary>PIQA Benchmarks</summary>
```
Benchmark   Metric                 Value
----------  ---------------------  ------
PIQA Base   acc,none               0.7900
            acc_stderr,none        0.0095
            acc_norm,none          0.8020
            acc_norm_stderr,none   0.0093

PIQA T93    acc,none               0.7900
            acc_stderr,none        0.0095
            acc_norm,none          0.8030
            acc_norm_stderr,none   0.0093

PIQA T159   acc,none               0.7878
            acc_stderr,none        0.0095
            acc_norm,none          0.8047
            acc_norm_stderr,none   0.0092

PIQA T163   acc,none               0.7884
            acc_stderr,none        0.0095
            acc_norm,none          0.8036
            acc_norm_stderr,none   0.0093

PIQA T80    acc,none               0.7884
            acc_stderr,none        0.0095
            acc_norm,none          0.8020
            acc_norm_stderr,none   0.0093

PIQA T174   acc,none               0.7889
            acc_stderr,none        0.0095
            acc_norm,none          0.8014
            acc_norm_stderr,none   0.0093
```
</details>
---
Prompt format: Mistral v3 Tekken or Metharme.

Can think via `<thinking>` or `<think>` tags.

Just like Roci X, but better.

(Model card is still a WIP.)
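For illustration, the two prompt formats mentioned above might be assembled like this. Both templates are assumptions based on common usage of these formats; prefer the bundled tokenizer's `apply_chat_template` as the authoritative source:

```python
def tekken_prompt(user, system=""):
    # Mistral v3 Tekken-style instruct wrapping (no spaces inside the
    # [INST] tags). Exact template is an assumption; verify against
    # the tokenizer's chat template.
    sys_part = f"{system}\n\n" if system else ""
    return f"<s>[INST]{sys_part}{user}[/INST]"

def metharme_prompt(user, system=""):
    # Metharme role tokens, as commonly documented; also an assumption.
    return f"<|system|>{system}<|user|>{user}<|model|>"
```

The model's reply is then generated after the closing `[/INST]` or `<|model|>` token, respectively.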
FP16: https://huggingface.co/TheDrummer/Rocinante-XL-16B-v1
GGUF: https://huggingface.co/TheDrummer/Rocinante-XL-16B-v1-GGUF