This is a decensored version of Skyfall-31B-v4.1, made using Heretic v1.2.0 with a focus on zero refusals at low KL divergence.
## KL Divergence
| Metric | This Model | Original Model |
|---|---|---|
| KL divergence | 0.0053 | 0 (by definition) |
| Refusals | 0/108 | 73/108 |
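The KL divergence above measures how far the modified model's next-token distribution drifts from the original's (0 for identical models, by definition). A minimal sketch of the computation from a single pair of logit vectors, using hypothetical values (this is illustrative, not Heretic's actual evaluation code):

```python
import numpy as np

def kl_divergence(logits_p, logits_q):
    """KL(P || Q) between two next-token distributions given as logits."""
    # Numerically stable softmax for each logit vector.
    p = np.exp(logits_p - logits_p.max()); p /= p.sum()
    q = np.exp(logits_q - logits_q.max()); q /= q.sum()
    return float(np.sum(p * (np.log(p) - np.log(q))))

# Identical logits give exactly 0, matching the "0 (by definition)" column.
print(kl_divergence(np.array([1.0, 2.0]), np.array([1.0, 2.0])))  # → 0.0
```

In practice the reported number is an average of such per-position divergences over an evaluation set.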
## Abliteration parameters
- Zero refusals with KL divergence of 0.0053
- Custom heretic training dataset
- Model-targeted Heretic configuration
- Abliterated with MPOA enabled (Magnitude-Preserving Orthogonal Ablation)
- Full row renormalization
- Winsorization quantile: 0.997
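The parameters above can be illustrated with a conceptual NumPy sketch. The `ablate_mpoa` and `winsorize` functions below are hypothetical names, not Heretic's API: orthogonal ablation projects a refusal direction out of each weight row, row renormalization restores each row's original norm (the magnitude-preserving part), and winsorization clips extreme values at a quantile such as 0.997:

```python
import numpy as np

def ablate_mpoa(W, refusal_dir):
    """Remove the refusal direction from each row of W, then rescale
    each row back to its original norm (full row renormalization)."""
    d = refusal_dir / np.linalg.norm(refusal_dir)
    orig_norms = np.linalg.norm(W, axis=1, keepdims=True)
    W_abl = W - np.outer(W @ d, d)  # project out the component along d
    new_norms = np.linalg.norm(W_abl, axis=1, keepdims=True)
    return W_abl * (orig_norms / np.maximum(new_norms, 1e-12))

def winsorize(x, q=0.997):
    """Clip values whose magnitude exceeds the q-th quantile of |x|."""
    lim = np.quantile(np.abs(x), q)
    return np.clip(x, -lim, lim)

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))
d = rng.standard_normal(16)
W2 = ablate_mpoa(W, d)
# Every row of W2 is orthogonal to d, and row norms equal those of W.
```

Preserving row magnitudes is what keeps the ablated model close to the original (hence the low KL divergence) while still removing the refusal direction.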
The following benchmarks were run on quantized versions of this model.
## Relative Perplexity
| Quant | Filename | PPL ± Error |
|---|---|---|
| Q8_0 | TheDrummer_Skyfall-31B-v4.1-Q8_0.gguf (original baseline) | 5.1950 +/- 0.03186 |
| Q8_0 | Skyfall-31B-v4.1-Heretic-v1.2-Q8_0.gguf | 5.1975 +/- 0.03188 |
| Q4_K_M | Skyfall-31B-v4.1-Heretic-v1.2-Q4_K_M.gguf | 5.2681 +/- 0.03232 |
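Perplexity, as reported in the table above, is the exponential of the mean negative log-likelihood per token. A minimal sketch from per-token log-probabilities (hypothetical values, not the llama.cpp implementation used for these numbers):

```python
import math

def perplexity(token_logprobs):
    """exp of the average negative log-likelihood per token."""
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)

# A model assigning each token probability 1/5 has perplexity 5.
logs = [math.log(1 / 5)] * 10
print(round(perplexity(logs), 4))  # → 5.0
```

Lower is better; the ~0.05 gap between Q8_0 and Q4_K_M above reflects quantization loss, while the Heretic Q8_0 sits within the error bars of the original baseline.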
## Benchmark Comparison
| Benchmark | TheDrummer_Skyfall-31B-v4.1-Q8_0.gguf | TheDrummer_Skyfall-31B-v4.1-Q4_K_M.gguf | Skyfall-31B-v4.1-Heretic-v1.2-Q4_K_M.gguf |
|---|---|---|---|
| Perplexity (Wikitext-2) | 5.1950 | 5.3158 | 5.2681 |
| HellaSwag | 83.75% | 83.00% | 81.50% |
| Winogrande | 77.27% | 76.87% | 77.03% |
| ARC-Challenge | 54.52% | 55.85% | 54.85% |
| MMLU | 43.99% | 44.32% | 43.73% |
*Note: the MMLU benchmark excludes the moral_scenarios, moral_disputes, business_ethics, professional_law, and jurisprudence subjects.*
## Model tree for grayarea/Skyfall-31B-v4.1-Heretic-v1.2

- Base model: mistralai/Mistral-Small-3.1-24B-Base-2503
- Finetuned: mistralai/Magistral-Small-2509
- Finetuned: TheDrummer/Skyfall-31B-v4.1