# Llama-3.1-8B-Instruct-abliterated-obliteratus

This model is an abliterated (uncensored) version of Llama-3.1-8B-Instruct, created with the OBLITERATUS (advanced) method.

## Abliteration Results

| Metric | Value |
|---|---|
| Refusals | 95/100 |
| Attack Success Rate (ASR) | 5.0% |
| KL Divergence | 0.5092 |
| Method | OBLITERATUS (advanced) |
| GPU | NVIDIA H100 PCIe |
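For context, the ASR row is simply the fraction of harmful evaluation prompts that were *not* refused. A minimal sketch of that arithmetic, assuming the counts shown in the table above:

```python
total_prompts = 100   # harmful prompts in the evaluation set (from the table)
refusals = 95         # responses judged as refusals
asr = (total_prompts - refusals) / total_prompts
print(f"ASR: {asr:.1%}")  # ASR: 5.0%
```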

## What is Abliteration?

Abliteration is a technique for removing refusal behavior from language models by identifying and orthogonalizing the "refusal direction" in the model's residual stream activation space. This model was created as part of the research paper:

Richard Young (2026). *Comparative Analysis of LLM Abliteration Methods: Scaling to MoE Architectures and Modern Tools*. arXiv:2512.13655
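The orthogonalization step can be sketched as follows. This is a minimal NumPy illustration of directional ablation in the common formulation (project the refusal direction out of a weight matrix that writes into the residual stream), not the exact OBLITERATUS implementation; the shapes and names here are illustrative assumptions:

```python
import numpy as np

def ablate_direction(W, r):
    """Return W' = W - r r^T W, removing the component of every
    output of W along the unit "refusal direction" r.

    W: (d_out, d_in) weight matrix writing into the residual stream.
    r: (d_out,) refusal direction in activation space.
    """
    r = r / np.linalg.norm(r)
    return W - np.outer(r, r @ W)

# Toy check: after ablation, outputs have no component along r.
rng = np.random.default_rng(0)
d = 8
W = rng.standard_normal((d, d))
r = rng.standard_normal(d)
W_abl = ablate_direction(W, r)
x = rng.standard_normal(d)
proj = (W_abl @ x) @ (r / np.linalg.norm(r))
print(abs(proj) < 1e-8)  # True: the refusal direction is unreachable
```

In practice this projection is applied to every matrix that writes into the residual stream (e.g. attention output and MLP down-projections), so no layer can express the refusal direction.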

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "richardyoung/Llama-3.1-8B-Instruct-abliterated-obliteratus", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("richardyoung/Llama-3.1-8B-Instruct-abliterated-obliteratus")

messages = [{"role": "user", "content": "Your prompt here"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Disclaimer

This model is released for research purposes only. The abliteration process removes safety guardrails. Users are responsible for ensuring appropriate use. This model should not be used to generate harmful, illegal, or unethical content.

## Dashboard

Interactive results dashboard: abliteration-methods-dashboard

## Collection

Part of the Uncensored and Abliterated LLMs collection.

## Citation

```bibtex
@article{young2024abliteration,
  title={Comparative Analysis of LLM Abliteration Methods},
  author={Young, Richard},
  journal={arXiv preprint arXiv:2512.13655},
  year={2024}
}
```