Qwen3-14B Abliterated (GGUF)

An abliterated (uncensored) version of Qwen/Qwen3-14B in GGUF format for local inference.

Abliteration Results

| Metric | Value |
|---|---|
| Base Refusals | 97/100 |
| Abliterated Refusals | 19/100 |
| Refusal Reduction | 80% |
| KL Divergence | 0.98 |

Conservative abliteration preserves model coherence (low KL divergence from the base model) while significantly reducing refusals.
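The headline reduction follows directly from the two refusal counts in the table. A trivial sanity check (not part of any evaluation harness):

```python
# Refusal reduction computed from the counts above.
base_refusals = 97         # base model refusals out of 100 prompts
abliterated_refusals = 19  # abliterated model refusals out of 100 prompts

reduction = (base_refusals - abliterated_refusals) / base_refusals
print(f"Refusal reduction: {reduction:.0%}")  # → Refusal reduction: 80%
```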

Quick Start

With Ollama

```bash
ollama run hf.co/richardyoung/Qwen3-14B-abliterated-GGUF
```

With llama.cpp

```bash
huggingface-cli download richardyoung/Qwen3-14B-abliterated-GGUF \
    --include "*Q4_K_M*" --local-dir ./models
```

```bash
./llama-cli -m ./models/*Q4_K_M*.gguf \
    -p "You are a helpful assistant." \
    --chat-template chatml -ngl 99
```

With Python (llama-cpp-python)

```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="richardyoung/Qwen3-14B-abliterated-GGUF",
    filename="*Q4_K_M*",
    n_gpu_layers=-1,
)

output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain abliteration in simple terms."}]
)
print(output["choices"][0]["message"]["content"])
```

Available Quantizations

| Quantization | Use Case |
|---|---|
| Q4_K_M | Recommended; good balance of quality and size |
| Q5_K_M | Higher quality |
| Q8_0 | Maximum quality |
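As a rough guide to disk and RAM footprint, GGUF file size scales with bits per weight. The bpw figures and the ~14.8B parameter count below are ballpark assumptions, not values published in this repo:

```python
# Rough GGUF file-size estimate: params * bits-per-weight / 8.
# bpw values are approximate averages for llama.cpp k-quants (assumption).
BPW = {"Q4_K_M": 4.85, "Q5_K_M": 5.69, "Q8_0": 8.50}
PARAMS = 14.8e9  # Qwen3-14B parameter count (the card rounds to 15B)

def approx_size_gb(quant: str) -> float:
    """Estimated on-disk size in GB for a given quantization."""
    return PARAMS * BPW[quant] / 8 / 1e9

for q in BPW:
    print(f"{q}: ~{approx_size_gb(q):.1f} GB")
```

Actual file sizes vary slightly because some tensors (e.g. embeddings) are stored at higher precision.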

What is Abliteration?

Abliteration removes the "refusal direction" from a model's residual stream, a surgical modification that disables safety refusals without retraining. See "Refusal in Language Models Is Mediated by a Single Direction" (Arditi et al., 2024).
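Concretely, if v is the refusal direction, each weight matrix that writes into the residual stream can be orthogonalized against it. A minimal NumPy sketch, illustrative only (this repo's exact procedure and layer selection are not specified here):

```python
import numpy as np

def ablate_direction(W: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Remove the component along direction v from every output of W.

    W: (d_model, d_in) weight matrix that writes into the residual stream.
    v: (d_model,) refusal direction.
    Returns W' = (I - v_hat v_hat^T) W, so v_hat . (W' x) == 0 for all x.
    """
    v_hat = v / np.linalg.norm(v)
    return W - np.outer(v_hat, v_hat @ W)

# After ablation, outputs have no component along the refusal direction.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))
v = rng.normal(size=8)
W_prime = ablate_direction(W, v)
print(np.allclose((v / np.linalg.norm(v)) @ W_prime, 0))  # True
```

Applying this projection to every matrix that feeds the residual stream (attention output and MLP down-projections, plus embeddings) is what makes the edit stick without retraining.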

Intended Use

Research, creative writing, education on alignment techniques, and unrestricted local inference.

Model Details

Architecture: qwen3
Model size: 15B params
Format: GGUF
Downloads last month: 1,070