---
base_model: DJLougen/Ornstein3.6-35B-A3B-RYS
base_model_relation: finetune
tags:
- gguf
- qwen3_5_moe
- mixture-of-experts
- text-generation
- qwen3.6
- rys
- saber
- refusal-ablation
- uncensored
language:
- en
license: apache-2.0
pipeline_tag: text-generation
---

# Ornstein3.6-35B-A3B-RYS-SABER
A fully uncensored version of [DJLougen/Ornstein3.6-35B-A3B-RYS](https://huggingface.co/DJLougen/Ornstein3.6-35B-A3B-RYS), processed with **SABER (Spectral Analysis-Based Entanglement Resolution)** — a novel refusal ablation method that surgically removes refusal behavior while preserving model capabilities.
> **See also:** [Ornstein3.6-35B-A3B](https://huggingface.co/DJLougen/Ornstein3.6-35B-A3B) (base) | [Ornstein3.6-35B-A3B-RYS](https://huggingface.co/DJLougen/Ornstein3.6-35B-A3B-RYS) (RYS layer duplication) | [Ornstein3.6-35B-A3B-RYS-SABER-GGUF](https://huggingface.co/DJLougen/Ornstein3.6-35B-A3B-RYS-SABER-GGUF) (quantized GGUFs)
## Important: this model requires a patched llama.cpp
RYS duplicates one of the middle layers of the Qwen3.5 MoE block, which breaks the hardcoded `full_attention_interval = 4` assumption in stock llama.cpp's Qwen3.5 loader — blk.11 ends up with linear-attention tensors where the loader expects full-attention ones, and the model fails with `check_tensor_dims` errors.
This release ships as **BF16 GGUF re-converted with per-layer `layer_types` baked into the file**, and you must run it with a llama.cpp build that reads that per-layer metadata:
**Patched fork:** https://github.com/DJLougen/llama.cpp (default branch `rys-qwen35`)
The patch is a single commit on top of `ggml-org/llama.cpp@d00685831`. It is backward-compatible — stock (non-RYS) Qwen3.5 GGUFs still load normally.
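The layer-shift failure mode can be sketched in a few lines. This is a hypothetical illustration, not llama.cpp code: the exact full/linear parity convention is an assumption, and only the off-by-one shift after the duplicated layer matters.

```python
# Stock loader assumption: one full-attention layer every `interval` layers.
interval = 4
n_layers = 40
stock = ["full" if (i + 1) % interval == 0 else "linear" for i in range(n_layers)]

# RYS duplicates layer 10, so the real model has 41 layers and blk.11
# is a copy of (linear-attention) blk.10.
actual = stock[:11] + stock[10:]

# A loader that still applies the fixed interval over 41 layers expects:
assumed = ["full" if (i + 1) % interval == 0 else "linear" for i in range(41)]

mismatches = [i for i in range(41) if actual[i] != assumed[i]]
print(mismatches[0])  # 11 — blk.11, exactly where check_tensor_dims fails
```

Baking the true per-layer types into the GGUF and reading them back, instead of recomputing the pattern from a fixed interval, is what the patch does.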
## Note: thinking model — use the bundled `chat_template.jinja`
This is a Qwen3-Thinking derivative and emits its reasoning inside `<think>...</think>` tags. If raw `<think>` blocks appear inline in every response from `llama-server` (or any OpenAI-compatible client), you need to apply the Qwen3 thinking chat template that ships in this repo.
```bash
llama-server \
  -m Ornstein3.6-35B-A3B-RYS-SABER-BF16.gguf \
  --jinja \
  --chat-template-file chat_template.jinja \
  -ngl 99 -c 8192
```
- `--jinja` enables jinja chat-template parsing.
- `--chat-template-file chat_template.jinja` overrides whatever template is embedded in the GGUF with the correct Qwen3-Thinking one from this repo.
Recent llama.cpp builds default `--reasoning-format` to `deepseek`, which moves the `<think>...</think>` span out of the `content` field and into a separate `reasoning_content` field on the OpenAI-compatible response, so `--jinja --chat-template-file` alone is enough. If you're on an older build and still see raw `<think>...</think>` blocks in `content`, add `--reasoning-format deepseek` explicitly.
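If you are stuck on a client or build that leaves the tags in `content`, you can also split them off client-side. A minimal sketch (the `split_reasoning` helper is hypothetical, not part of llama.cpp or the OpenAI client) that mimics what the server-side `deepseek` format does:

```python
import re

def split_reasoning(content: str) -> tuple[str, str]:
    """Split a '<think>...</think>' span out of a response string,
    returning (reasoning, answer) — analogous to the server populating
    reasoning_content vs. content."""
    m = re.search(r"<think>(.*?)</think>", content, flags=re.DOTALL)
    if not m:
        return "", content
    reasoning = m.group(1).strip()
    answer = (content[:m.start()] + content[m.end():]).strip()
    return reasoning, answer

reasoning, answer = split_reasoning("<think>check the units first</think>42 km")
print(answer)  # 42 km
```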
## Support This Work
I'm a PhD student in visual neuroscience at the University of Toronto who also happens to spend way too much time fine-tuning, merging, and quantizing open-weight models on rented H100s and a local DGX Spark. All training compute is self-funded — balancing GPU costs against a student budget. If my uploads have been useful to you, consider buying a PhD student a coffee. It goes a long way toward keeping these experiments running.
**[Support on Ko-fi](https://ko-fi.com/djlougen)**
---
## What is SABER?
SABER is a multi-stage refusal ablation pipeline that goes beyond simple direction removal. Where prior methods (Arditi et al. 2024; Gabliteration) locate refusal directions and subtract them wholesale, SABER introduces three key innovations:
1. **Entanglement-aware ablation** — SABER distinguishes between "pure refusal" directions and directions that are entangled with useful capabilities. Pure refusal gets fully removed; entangled components are handled carefully to preserve model quality where blunt methods degrade it.
2. **Principled layer selection** — Rather than targeting layers heuristically, SABER uses statistical analysis to automatically identify the layers where refusal behavior is most concentrated and most cleanly separable from normal operation.
3. **Iterative refinement** — After each ablation pass, SABER re-probes the model to catch dormant refusal circuits that activate to compensate for removed ones. Multiple passes with decaying strength ensure thorough removal without overcorrection.
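The direction-removal primitive these methods share can be sketched with NumPy. This is a toy illustration of Arditi-style orthogonalization under synthetic data, not the SABER implementation; SABER's entanglement analysis, layer selection, and re-probing are built on top of this step.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 64

# Hypothetical "refusal direction": difference of mean activations
# between harmful and harmless prompt batches, normalized.
harmful_acts = rng.normal(size=(32, d_model)) + 0.5
harmless_acts = rng.normal(size=(32, d_model))
r = harmful_acts.mean(axis=0) - harmless_acts.mean(axis=0)
r /= np.linalg.norm(r)

# Ablate: project r out of a weight matrix's output space,
# W' = (I - r r^T) W, so the layer can no longer write along r.
W = rng.normal(size=(d_model, d_model))
W_ablated = W - np.outer(r, r) @ W

# Any output component along r is now zero (up to float rounding).
print(np.abs(r @ W_ablated).max())
```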
## SABER Results
| Metric | Value |
|---|---|
| **Selected layers** | 24, 25, 26, 27, 28, 29, 30, 31, 32 |
| **Total directions ablated** | 54 |
| **Iterations to convergence** | 2 |
| **Final residual refusal** | 0% |
| **Capability preservation** | 100% |
The ablation converged in just 2 iterations, removing 54 refusal directions across 9 layers (24-32) in the upper-middle portion of the network. Capability preservation remained at 100% — no measurable degradation in general model quality.
## Model Lineage
```
Qwen 3.6 35B-A3B (base)
└── Ornstein3.6-35B-A3B (DDM-curated reasoning fine-tune, 799 examples)
└── Ornstein3.6-35B-A3B-RYS (layer 10 duplicated via RYS brain scan, +49% reasoning)
└── Ornstein3.6-35B-A3B-RYS-SABER (refusal ablation, this model)
```
## Details
- **Developed by:** DJLougen
- **Architecture:** `Qwen3_5MoeForCausalLM` — Qwen 3.6 MoE with linear + full attention interleaved
- **Parameters:** 34.66B total, ~3B active (256 experts, 8 active per token)
- **Hidden size / layers:** 2048 / 41 (40 original + 1 RYS-duplicated)
- **Context length:** 262,144 tokens
- **Distribution format:** BF16 GGUF (with per-layer `head_count_kv` encoding the RYS pattern)
- **License:** Apache 2.0
## Usage
### Build the patched llama.cpp
```bash
git clone https://github.com/DJLougen/llama.cpp.git
cd llama.cpp
cmake -B build -DGGML_CUDA=ON -DCMAKE_BUILD_TYPE=Release
cmake --build build -j
```
(Drop `-DGGML_CUDA=ON` for a CPU-only build. The patch touches the GGUF loader and three model forward files; backend selection is independent.)
### Run the BF16 GGUF directly
```bash
# Download the 71 GB BF16 GGUF from this repo
hf download DJLougen/Ornstein3.6-35B-A3B-RYS-SABER \
  Ornstein3.6-35B-A3B-RYS-SABER-BF16.gguf \
  --local-dir .

./build/bin/llama-cli \
  -m Ornstein3.6-35B-A3B-RYS-SABER-BF16.gguf \
  -p "Your prompt here" \
  -ngl 99 -c 8192
```
For smaller VRAM footprints, grab one of the quantized variants from [Ornstein3.6-35B-A3B-RYS-SABER-GGUF](https://huggingface.co/DJLougen/Ornstein3.6-35B-A3B-RYS-SABER-GGUF) instead.
## Disclaimer
This model has had its refusal training removed. It will comply with requests that the base model would refuse. The user assumes full responsibility for how this model is used. This release is intended for research, creative, and educational purposes.
## License
Apache 2.0 — inherited from the Qwen 3.6 base release.
## Citation / prior art
SABER builds on a line of refusal-direction research, including:
- Arditi et al., [*Refusal in LLMs Is Mediated by a Single Direction*](https://arxiv.org/abs/2406.11717) (NeurIPS 2024)
- Gülmez, [*Gabliteration: Adaptive Multi-Directional Neural Weight Modification*](https://arxiv.org/abs/2512.18901) (2025)
- Prakash et al., [*Beyond I'm Sorry, I Can't: Dissecting Large Language Model Refusal*](https://arxiv.org/abs/2509.09708) (2025) — hydra features
- Siu et al., [*COSMIC: Generalized Refusal Direction Identification in LLM Activations*](https://arxiv.org/abs/2506.00085) (ACL 2025)
- Yeo et al., [*Understanding Refusal in Language Models with Sparse Autoencoders*](https://arxiv.org/abs/2505.23556) (EMNLP 2025)