Qwen3.5-27B-abliterated-v2-MAX

Qwen3.5-27B-abliterated-v2-MAX is an unredacted evolution built on top of Qwen/Qwen3.5-27B. This version introduces a more optimized abliteration rate, combining refined refusal-direction analysis with enhanced training strategies to further minimize internal refusal behaviors while preserving strong reasoning and instruction-following capabilities. The result is a 27B-parameter language model optimized for highly detailed responses and reliable instruction adherence.

This model is intended strictly for research and learning purposes. Due to reduced internal refusal mechanisms, it may generate sensitive or unrestricted content. Users assume full responsibility for how the model is used. The authors and hosting platform disclaim any liability for generated outputs.

Key Highlights

  • Optimized Abliteration Rate (v2): Enhanced suppression of refusal directions with improved balance between openness, coherence, and stability.
  • Advanced Refusal Direction Analysis: Uses targeted activation analysis to identify and mitigate refusal directions within the model’s latent space.
  • Abliterated v2 Training Strategy: Further reduces refusal behaviors while maintaining response quality and consistency.
  • 27B Parameter Architecture: Built on Qwen3.5-27B, delivering significantly stronger reasoning and knowledge capacity compared to smaller variants.
  • Improved Instruction Adherence: Better handling of complex, multi-step, and nuanced prompts with minimal unnecessary refusals.
  • High-Capability Deployment: Suitable for advanced research, large-scale inference, and high-performance AI applications.
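The refusal-direction idea referenced above can be sketched in a few lines: given a unit vector r estimated from activation differences between refused and answered prompts, a weight matrix is orthogonalized so its outputs carry no component along r. This is a minimal NumPy illustration of the general technique, not the actual pipeline used to produce this model:

```python
import numpy as np

def orthogonalize(W: np.ndarray, r: np.ndarray) -> np.ndarray:
    """Project the refusal direction r out of the output space of W.

    W maps inputs to outputs (rows = output dims); r is a vector in the
    output space. W' = W - r (r^T W) zeroes the output component along r.
    """
    r = r / np.linalg.norm(r)          # normalize the direction
    return W - np.outer(r, r @ W)      # subtract the rank-1 projection

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 4))        # toy weight matrix
r = rng.standard_normal(8)             # toy "refusal direction"
W2 = orthogonalize(W, r)

# Outputs of W2 now have (numerically) zero component along r.
residual = np.abs((r / np.linalg.norm(r)) @ W2).max()
print(residual)
```

In a real abliteration run this projection would be applied to attention-output and MLP-output matrices across many layers, with the direction estimated per layer.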

Quick Start with Transformers

pip install transformers==5.4.0
# or
pip install git+https://github.com/huggingface/transformers.git

from transformers import Qwen3_5ForConditionalGeneration, AutoProcessor
import torch

model = Qwen3_5ForConditionalGeneration.from_pretrained(
    "prithivMLmods/Qwen3.5-27B-abliterated-v2-MAX",
    torch_dtype="auto",
    device_map="auto"
)

processor = AutoProcessor.from_pretrained(
    "prithivMLmods/Qwen3.5-27B-abliterated-v2-MAX"
)

messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "Explain how transformer models work in simple terms."}
        ],
    }
]

text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

inputs = processor(
    text=[text],
    padding=True,
    return_tensors="pt"
).to(model.device)

generated_ids = model.generate(**inputs, max_new_tokens=256)

# Strip the prompt tokens so only the newly generated text is decoded.

generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]

output_text = processor.batch_decode(
    generated_ids_trimmed,
    skip_special_tokens=True,
    clean_up_tokenization_spaces=False
)

print(output_text[0])

Intended Use

  • Alignment & Refusal Research: Studying the effects of aggressive abliteration and reduced refusal mechanisms.
  • Red-Teaming Experiments: Evaluating robustness under adversarial or edge-case prompts.
  • High-Capability Local AI Deployment: Running powerful instruction models on high-memory or multi-GPU setups.
  • Research Prototyping: Experimentation with large transformer architectures and alignment techniques.

Limitations & Risks

Important Note: This model intentionally minimizes built-in safety refusals.

  • High Risk of Sensitive Outputs: May generate unrestricted, controversial, or explicit responses.
  • User Responsibility: Must be used in a safe, ethical, and lawful manner.
  • Compute Requirements: A 27B model requires substantial GPU memory (roughly 54 GB of weights alone in BF16) or optimized inference strategies such as quantization or tensor parallelism.
  • Abliteration Trade-offs: Increased openness may sometimes affect safety alignment or output consistency.
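The compute requirement above follows from simple arithmetic: weight memory scales with parameter count times bytes per parameter, which is why 4-bit quantization makes a 27B model feasible on a single high-memory GPU. A rough back-of-the-envelope sketch (weights only; activations, KV cache, and framework overhead add more):

```python
def approx_weight_memory_gb(n_params: float, bits_per_param: int) -> float:
    """Approximate weight memory in GB: params * bits / 8 bits-per-byte / 1e9."""
    return n_params * bits_per_param / 8 / 1e9

# 27B parameters stored in BF16 (16 bits) vs. 4-bit quantized
bf16_gb = approx_weight_memory_gb(27e9, 16)  # 54.0 GB
int4_gb = approx_weight_memory_gb(27e9, 4)   # 13.5 GB
print(f"BF16: {bf16_gb:.1f} GB, 4-bit: {int4_gb:.1f} GB")
```

At ~13.5 GB of weights, a 4-bit quantization fits comfortably on a single 24 GB GPU, whereas the BF16 checkpoint needs multi-GPU tensor parallelism or CPU offload.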