Huihui-Qwen3.5-27B-abliterated-MLX-4bit

This repository contains an MLX-optimized 4-bit quantized conversion of huihui-ai/Huihui-Qwen3.5-27B-abliterated for Apple Silicon inference.

The model was converted for mlx-lm text generation workflows.

Model details

  • Base model: huihui-ai/Huihui-Qwen3.5-27B-abliterated
  • Architecture family: Qwen3.5
  • Format: MLX safetensors shards
  • Quantization: affine 4-bit, group size 64
  • Observed average precision: ~4.501 bits/weight
  • Approximate on-disk size: ~12 GB
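
The observed average of ~4.5 bits/weight is consistent with the per-group overhead of affine quantization: assuming each group of 64 weights carries a 16-bit scale and a 16-bit bias alongside its 4-bit values (the layout MLX typically uses), the effective rate works out to 4 + (16 + 16)/64 = 4.5 bits per weight.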

What is included

  • model-00001-of-00003.safetensors
  • model-00002-of-00003.safetensors
  • model-00003-of-00003.safetensors
  • model.safetensors.index.json
  • config.json
  • tokenizer.json
  • tokenizer_config.json
  • generation_config.json
  • chat_template.jinja

Usage with MLX

Install dependencies:

pip install -U mlx-lm

Quick generation:

mlx_lm.generate \
  --model dotwee/Huihui-Qwen3.5-27B-abliterated-MLX-4bit \
  --prompt "Write one short sentence about MLX." \
  --max-tokens 128

Chat mode:

mlx_lm.chat --model dotwee/Huihui-Qwen3.5-27B-abliterated-MLX-4bit
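
Python API:

A minimal sketch using the mlx-lm load and generate helpers; the prompt text and token limit below are illustrative.

from mlx_lm import load, generate

# Download (or reuse a cached copy of) the 4-bit MLX checkpoint
model, tokenizer = load("dotwee/Huihui-Qwen3.5-27B-abliterated-MLX-4bit")

# Build a single-turn prompt with the bundled chat template
messages = [{"role": "user", "content": "Write one short sentence about MLX."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Generate up to 128 new tokens
text = generate(model, tokenizer, prompt=prompt, max_tokens=128)
print(text)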

Notes and limitations

  • This artifact is intended for text generation with mlx-lm.
  • The upstream checkpoint is described as uncensored/abliterated and may produce sensitive, controversial, or unsafe outputs.
  • Use only in contexts where strict output review and moderation are possible.
  • For production or public-facing deployments, add downstream safety controls.

License

This release follows the upstream Qwen3.5 license and terms: Apache-2.0 (see the license_link field in the model metadata).

Provenance

  • Upstream model: huihui-ai/Huihui-Qwen3.5-27B-abliterated
  • Converted with: mlx_lm convert
  • Conversion settings: --quantize --q-bits 4 --q-group-size 64
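
The conversion can be reproduced with a command along the following lines; this is a sketch based on the settings above (the local output path is an example), not the exact original invocation:

mlx_lm.convert \
  --hf-path huihui-ai/Huihui-Qwen3.5-27B-abliterated \
  --mlx-path Huihui-Qwen3.5-27B-abliterated-MLX-4bit \
  --quantize \
  --q-bits 4 \
  --q-group-size 64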